00:00:00.001 Started by upstream project "autotest-nightly" build number 3923 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3298 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.109 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.110 The recommended git tool is: git 00:00:00.111 using credential 00000000-0000-0000-0000-000000000002 00:00:00.114 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.174 Fetching changes from the remote Git repository 00:00:00.176 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.225 Using shallow fetch with depth 1 00:00:00.225 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.225 > git --version # timeout=10 00:00:00.268 > git --version # 'git version 2.39.2' 00:00:00.268 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.289 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.289 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.919 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.929 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.940 Checking out Revision 4313f32deecbb7108199ebd1913b403a3005dece (FETCH_HEAD) 00:00:07.940 > git config core.sparsecheckout # timeout=10 00:00:07.950 > git read-tree -mu HEAD # timeout=10 00:00:07.965 > git checkout -f 4313f32deecbb7108199ebd1913b403a3005dece # timeout=5 00:00:07.985 Commit message: "packer: Add bios builder" 00:00:07.985 > git rev-list --no-walk 4313f32deecbb7108199ebd1913b403a3005dece # timeout=10 00:00:08.073 [Pipeline] Start of Pipeline 00:00:08.085 [Pipeline] library 00:00:08.086 Loading library shm_lib@master 00:00:08.086 Library shm_lib@master is cached. Copying from home. 00:00:08.101 [Pipeline] node 00:00:08.112 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:08.114 [Pipeline] { 00:00:08.125 [Pipeline] catchError 00:00:08.126 [Pipeline] { 00:00:08.141 [Pipeline] wrap 00:00:08.152 [Pipeline] { 00:00:08.160 [Pipeline] stage 00:00:08.162 [Pipeline] { (Prologue) 00:00:08.340 [Pipeline] sh 00:00:08.621 + logger -p user.info -t JENKINS-CI 00:00:08.639 [Pipeline] echo 00:00:08.640 Node: GP11 00:00:08.645 [Pipeline] sh 00:00:08.940 [Pipeline] setCustomBuildProperty 00:00:08.950 [Pipeline] echo 00:00:08.951 Cleanup processes 00:00:08.955 [Pipeline] sh 00:00:09.239 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.239 423114 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.250 [Pipeline] sh 00:00:09.531 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.531 ++ grep -v 'sudo pgrep' 00:00:09.531 ++ awk '{print $1}' 00:00:09.531 + sudo kill -9 00:00:09.531 + true 00:00:09.548 [Pipeline] cleanWs 00:00:09.559 [WS-CLEANUP] Deleting project workspace... 00:00:09.559 [WS-CLEANUP] Deferred wipeout is used... 
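[Editor's note, not part of the console log] The prologue above traces a stale-process cleanup before the workspace wipe. A consolidated sketch of that step, under the assumption that the workspace path from the log is reused, looks roughly like this; it is an illustration of the traced pipeline, not the CI job's actual script:

```bash
#!/usr/bin/env bash
# Sketch of the cleanup step traced above: list any processes still running
# out of the test workspace, drop the pgrep invocation itself, and kill the
# rest. The trailing "|| true" mirrors the "+ true" in the trace, which keeps
# the step green when no stale processes are found.
WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest   # path taken from the log

pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
sudo kill -9 $pids || true
```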
00:00:09.566 [WS-CLEANUP] done 00:00:09.571 [Pipeline] setCustomBuildProperty 00:00:09.587 [Pipeline] sh 00:00:09.872 + sudo git config --global --replace-all safe.directory '*' 00:00:09.962 [Pipeline] httpRequest 00:00:10.001 [Pipeline] echo 00:00:10.003 Sorcerer 10.211.164.101 is alive 00:00:10.012 [Pipeline] httpRequest 00:00:10.017 HttpMethod: GET 00:00:10.018 URL: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:10.018 Sending request to url: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:10.030 Response Code: HTTP/1.1 200 OK 00:00:10.030 Success: Status code 200 is in the accepted range: 200,404 00:00:10.031 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:16.423 [Pipeline] sh 00:00:16.712 + tar --no-same-owner -xf jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:16.733 [Pipeline] httpRequest 00:00:16.766 [Pipeline] echo 00:00:16.768 Sorcerer 10.211.164.101 is alive 00:00:16.778 [Pipeline] httpRequest 00:00:16.783 HttpMethod: GET 00:00:16.784 URL: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:16.784 Sending request to url: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:16.804 Response Code: HTTP/1.1 200 OK 00:00:16.805 Success: Status code 200 is in the accepted range: 200,404 00:00:16.806 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:01:12.056 [Pipeline] sh 00:01:12.403 + tar --no-same-owner -xf spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:01:15.714 [Pipeline] sh 00:01:15.999 + git -C spdk log --oneline -n5 00:01:16.000 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata. 
00:01:16.000 fc2398dfa raid: clear base bdev configure_cb after executing 00:01:16.000 5558f3f50 raid: complete bdev_raid_create after sb is written 00:01:16.000 d005e023b raid: fix empty slot not updated in sb after resize 00:01:16.000 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set 00:01:16.012 [Pipeline] } 00:01:16.027 [Pipeline] // stage 00:01:16.036 [Pipeline] stage 00:01:16.038 [Pipeline] { (Prepare) 00:01:16.054 [Pipeline] writeFile 00:01:16.071 [Pipeline] sh 00:01:16.356 + logger -p user.info -t JENKINS-CI 00:01:16.369 [Pipeline] sh 00:01:16.654 + logger -p user.info -t JENKINS-CI 00:01:16.666 [Pipeline] sh 00:01:16.951 + cat autorun-spdk.conf 00:01:16.951 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:16.951 SPDK_TEST_NVMF=1 00:01:16.951 SPDK_TEST_NVME_CLI=1 00:01:16.951 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:16.951 SPDK_TEST_NVMF_NICS=e810 00:01:16.951 SPDK_RUN_ASAN=1 00:01:16.951 SPDK_RUN_UBSAN=1 00:01:16.951 NET_TYPE=phy 00:01:16.959 RUN_NIGHTLY=1 00:01:16.963 [Pipeline] readFile 00:01:16.987 [Pipeline] withEnv 00:01:16.989 [Pipeline] { 00:01:17.002 [Pipeline] sh 00:01:17.288 + set -ex 00:01:17.288 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:17.288 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:17.288 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:17.288 ++ SPDK_TEST_NVMF=1 00:01:17.288 ++ SPDK_TEST_NVME_CLI=1 00:01:17.288 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:17.288 ++ SPDK_TEST_NVMF_NICS=e810 00:01:17.288 ++ SPDK_RUN_ASAN=1 00:01:17.288 ++ SPDK_RUN_UBSAN=1 00:01:17.288 ++ NET_TYPE=phy 00:01:17.288 ++ RUN_NIGHTLY=1 00:01:17.288 + case $SPDK_TEST_NVMF_NICS in 00:01:17.288 + DRIVERS=ice 00:01:17.288 + [[ tcp == \r\d\m\a ]] 00:01:17.288 + [[ -n ice ]] 00:01:17.288 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:17.288 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:17.288 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:17.288 rmmod: ERROR: Module irdma is not currently loaded 00:01:17.288 rmmod: ERROR: Module i40iw is not currently loaded 00:01:17.288 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:17.288 + true 00:01:17.288 + for D in $DRIVERS 00:01:17.288 + sudo modprobe ice 00:01:17.288 + exit 0 00:01:17.298 [Pipeline] } 00:01:17.314 [Pipeline] // withEnv 00:01:17.319 [Pipeline] } 00:01:17.334 [Pipeline] // stage 00:01:17.342 [Pipeline] catchError 00:01:17.344 [Pipeline] { 00:01:17.358 [Pipeline] timeout 00:01:17.358 Timeout set to expire in 50 min 00:01:17.360 [Pipeline] { 00:01:17.373 [Pipeline] stage 00:01:17.375 [Pipeline] { (Tests) 00:01:17.389 [Pipeline] sh 00:01:17.675 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:17.675 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:17.675 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:17.675 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:17.675 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:17.675 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:17.675 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:17.675 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:17.675 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:17.675 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:17.675 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:17.675 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:17.675 + source /etc/os-release 00:01:17.675 ++ NAME='Fedora Linux' 00:01:17.675 ++ VERSION='38 (Cloud Edition)' 00:01:17.675 ++ ID=fedora 00:01:17.675 ++ VERSION_ID=38 00:01:17.675 ++ VERSION_CODENAME= 00:01:17.675 ++ PLATFORM_ID=platform:f38 00:01:17.675 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:17.675 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:17.675 ++ LOGO=fedora-logo-icon 00:01:17.675 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:17.675 ++ HOME_URL=https://fedoraproject.org/ 00:01:17.675 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:17.675 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:17.675 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:17.675 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:17.675 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:17.675 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:17.675 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:17.675 ++ SUPPORT_END=2024-05-14 00:01:17.675 ++ VARIANT='Cloud Edition' 00:01:17.675 ++ VARIANT_ID=cloud 00:01:17.675 + uname -a 00:01:17.675 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:17.675 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:18.614 Hugepages 00:01:18.614 node hugesize free / total 00:01:18.614 node0 1048576kB 0 / 0 00:01:18.614 node0 2048kB 0 / 0 00:01:18.614 node1 1048576kB 0 / 0 00:01:18.614 node1 2048kB 0 / 0 00:01:18.614 00:01:18.614 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:18.614 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:18.614 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:18.614 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:18.614 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:18.614 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:18.614 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:18.614 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:18.614 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:18.614 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:18.614 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:18.614 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:18.614 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:18.614 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:18.614 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:18.614 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:18.614 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:18.873 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:18.873 + rm -f /tmp/spdk-ld-path 00:01:18.873 + source autorun-spdk.conf 00:01:18.873 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:18.873 ++ SPDK_TEST_NVMF=1 00:01:18.873 ++ SPDK_TEST_NVME_CLI=1 00:01:18.873 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:18.873 ++ SPDK_TEST_NVMF_NICS=e810 00:01:18.873 ++ SPDK_RUN_ASAN=1 00:01:18.873 ++ SPDK_RUN_UBSAN=1 00:01:18.873 ++ NET_TYPE=phy 00:01:18.873 ++ RUN_NIGHTLY=1 00:01:18.873 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:18.873 + [[ -n '' ]] 00:01:18.873 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:18.873 + for M in /var/spdk/build-*-manifest.txt 00:01:18.873 + [[ -f 
/var/spdk/build-pkg-manifest.txt ]] 00:01:18.873 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:18.873 + for M in /var/spdk/build-*-manifest.txt 00:01:18.873 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:18.873 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:18.873 ++ uname 00:01:18.873 + [[ Linux == \L\i\n\u\x ]] 00:01:18.873 + sudo dmesg -T 00:01:18.873 + sudo dmesg --clear 00:01:18.873 + dmesg_pid=424411 00:01:18.873 + [[ Fedora Linux == FreeBSD ]] 00:01:18.873 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:18.873 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:18.873 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:18.873 + sudo dmesg -Tw 00:01:18.874 + [[ -x /usr/src/fio-static/fio ]] 00:01:18.874 + export FIO_BIN=/usr/src/fio-static/fio 00:01:18.874 + FIO_BIN=/usr/src/fio-static/fio 00:01:18.874 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:18.874 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:18.874 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:18.874 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:18.874 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:18.874 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:18.874 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:18.874 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:18.874 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:18.874 Test configuration: 00:01:18.874 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:18.874 SPDK_TEST_NVMF=1 00:01:18.874 SPDK_TEST_NVME_CLI=1 00:01:18.874 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:18.874 SPDK_TEST_NVMF_NICS=e810 00:01:18.874 SPDK_RUN_ASAN=1 00:01:18.874 SPDK_RUN_UBSAN=1 00:01:18.874 NET_TYPE=phy 00:01:18.874 RUN_NIGHTLY=1 16:06:38 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:18.874 16:06:38 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:18.874 16:06:38 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:18.874 16:06:38 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:18.874 16:06:38 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.874 16:06:38 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.874 16:06:38 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.874 16:06:38 -- paths/export.sh@5 -- $ export PATH 00:01:18.874 16:06:38 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.874 16:06:38 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:18.874 16:06:38 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:18.874 16:06:38 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1722002798.XXXXXX 00:01:18.874 16:06:38 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1722002798.hU1pTA 00:01:18.874 16:06:38 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:18.874 16:06:38 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:01:18.874 16:06:38 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:18.874 16:06:38 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:18.874 16:06:38 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:18.874 16:06:38 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:18.874 16:06:38 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:01:18.874 16:06:38 -- common/autotest_common.sh@10 -- $ set +x 00:01:18.874 16:06:38 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:01:18.874 16:06:38 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:18.874 16:06:38 -- pm/common@17 -- $ local monitor 00:01:18.874 16:06:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:18.874 16:06:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:18.874 16:06:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:18.874 16:06:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:18.874 16:06:38 -- pm/common@21 -- $ date +%s 00:01:18.874 16:06:38 -- pm/common@21 -- $ date +%s 00:01:18.874 16:06:38 -- pm/common@25 -- $ sleep 1 00:01:18.874 16:06:38 -- pm/common@21 -- $ date +%s 00:01:18.874 16:06:38 -- pm/common@21 -- $ date +%s 00:01:18.874 16:06:38 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1722002798 00:01:18.874 16:06:38 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1722002798 00:01:18.874 16:06:38 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1722002798 00:01:18.874 16:06:38 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1722002798 00:01:18.874 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1722002798_collect-vmstat.pm.log 00:01:18.874 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1722002798_collect-cpu-load.pm.log 00:01:18.874 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1722002798_collect-cpu-temp.pm.log 00:01:18.874 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1722002798_collect-bmc-pm.bmc.pm.log 00:01:19.816 16:06:39 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:01:19.816 16:06:39 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:19.816 16:06:39 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:19.816 16:06:39 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:19.816 16:06:39 -- spdk/autobuild.sh@16 -- $ date -u 00:01:19.816 Fri Jul 26 02:06:39 PM UTC 2024 00:01:19.816 16:06:39 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:20.075 v24.09-pre-321-g704257090 00:01:20.075 16:06:39 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:20.075 16:06:39 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:20.075 16:06:39 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:20.075 16:06:39 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:20.075 16:06:39 -- common/autotest_common.sh@10 -- $ set +x 00:01:20.075 ************************************ 00:01:20.075 START TEST asan 00:01:20.075 ************************************ 00:01:20.075 16:06:39 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:01:20.075 using asan 00:01:20.075 00:01:20.075 real 0m0.000s 00:01:20.075 user 0m0.000s 00:01:20.075 sys 0m0.000s 00:01:20.075 16:06:39 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:20.075 16:06:39 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:20.075 ************************************ 00:01:20.075 END TEST asan 00:01:20.075 ************************************ 00:01:20.075 16:06:39 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:20.075 16:06:39 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:20.075 16:06:39 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:20.075 16:06:39 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:20.075 16:06:39 -- common/autotest_common.sh@10 -- $ set +x 00:01:20.075 ************************************ 00:01:20.075 START TEST ubsan 00:01:20.075 ************************************ 00:01:20.075 16:06:39 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:20.075 using ubsan 00:01:20.075 00:01:20.075 real 0m0.000s 00:01:20.075 user 0m0.000s 00:01:20.075 sys 0m0.000s 00:01:20.075 16:06:39 ubsan -- common/autotest_common.sh@1126 -- $ 
xtrace_disable 00:01:20.075 16:06:39 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:20.075 ************************************ 00:01:20.075 END TEST ubsan 00:01:20.075 ************************************ 00:01:20.075 16:06:39 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:20.075 16:06:39 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:20.075 16:06:39 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:20.075 16:06:39 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:20.075 16:06:39 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:20.075 16:06:39 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:20.075 16:06:39 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:20.075 16:06:39 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:20.075 16:06:39 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared 00:01:20.075 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:20.075 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:20.333 Using 'verbs' RDMA provider 00:01:31.263 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:41.252 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:41.252 Creating mk/config.mk...done. 00:01:41.252 Creating mk/cc.flags.mk...done. 00:01:41.252 Type 'make' to build. 00:01:41.252 16:07:00 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:01:41.252 16:07:00 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:41.252 16:07:00 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:41.252 16:07:00 -- common/autotest_common.sh@10 -- $ set +x 00:01:41.252 ************************************ 00:01:41.252 START TEST make 00:01:41.252 ************************************ 00:01:41.252 16:07:00 make -- common/autotest_common.sh@1125 -- $ make -j48 00:01:41.252 make[1]: Nothing to be done for 'all'. 
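[Editor's note, not part of the console log] At this point the autobuild stage has configured SPDK with the flag set printed in the trace and kicked off the parallel build. A hedged sketch of the equivalent standalone invocation, with the path and flags copied from the log above and `nproc` standing in for the job's hard-coded `-j48`:

```bash
#!/usr/bin/env bash
# Sketch (not from the log) of the build the autobuild stage drives above:
# configure SPDK with the flags shown in the trace, then build in parallel.
set -e
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path taken from the log

./configure --enable-debug --enable-werror --with-rdma --with-idxd \
            --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
            --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared

make -j"$(nproc)"    # the CI job uses -j48; nproc is a portable stand-in
```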
00:01:49.425 The Meson build system 00:01:49.425 Version: 1.3.1 00:01:49.425 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:49.425 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:49.425 Build type: native build 00:01:49.425 Program cat found: YES (/usr/bin/cat) 00:01:49.425 Project name: DPDK 00:01:49.425 Project version: 24.03.0 00:01:49.425 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:49.425 C linker for the host machine: cc ld.bfd 2.39-16 00:01:49.425 Host machine cpu family: x86_64 00:01:49.425 Host machine cpu: x86_64 00:01:49.425 Message: ## Building in Developer Mode ## 00:01:49.425 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:49.425 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:49.425 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:49.425 Program python3 found: YES (/usr/bin/python3) 00:01:49.425 Program cat found: YES (/usr/bin/cat) 00:01:49.425 Compiler for C supports arguments -march=native: YES 00:01:49.425 Checking for size of "void *" : 8 00:01:49.425 Checking for size of "void *" : 8 (cached) 00:01:49.425 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:49.425 Library m found: YES 00:01:49.425 Library numa found: YES 00:01:49.425 Has header "numaif.h" : YES 00:01:49.425 Library fdt found: NO 00:01:49.425 Library execinfo found: NO 00:01:49.425 Has header "execinfo.h" : YES 00:01:49.425 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:49.425 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:49.425 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:49.425 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:49.425 Run-time dependency openssl found: YES 3.0.9 00:01:49.425 Run-time dependency libpcap found: YES 1.10.4 00:01:49.425 Has header "pcap.h" with dependency libpcap: YES 00:01:49.425 Compiler for C supports arguments -Wcast-qual: YES 00:01:49.425 Compiler for C supports arguments -Wdeprecated: YES 00:01:49.425 Compiler for C supports arguments -Wformat: YES 00:01:49.425 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:49.425 Compiler for C supports arguments -Wformat-security: NO 00:01:49.425 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:49.425 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:49.425 Compiler for C supports arguments -Wnested-externs: YES 00:01:49.425 Compiler for C supports arguments -Wold-style-definition: YES 00:01:49.425 Compiler for C supports arguments -Wpointer-arith: YES 00:01:49.425 Compiler for C supports arguments -Wsign-compare: YES 00:01:49.425 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:49.425 Compiler for C supports arguments -Wundef: YES 00:01:49.425 Compiler for C supports arguments -Wwrite-strings: YES 00:01:49.425 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:49.425 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:49.425 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:49.425 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:49.425 Program objdump found: YES (/usr/bin/objdump) 00:01:49.425 Compiler for C supports arguments -mavx512f: YES 00:01:49.425 Checking if "AVX512 checking" compiles: YES 
00:01:49.425 Fetching value of define "__SSE4_2__" : 1 00:01:49.425 Fetching value of define "__AES__" : 1 00:01:49.425 Fetching value of define "__AVX__" : 1 00:01:49.425 Fetching value of define "__AVX2__" : (undefined) 00:01:49.425 Fetching value of define "__AVX512BW__" : (undefined) 00:01:49.425 Fetching value of define "__AVX512CD__" : (undefined) 00:01:49.425 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:49.425 Fetching value of define "__AVX512F__" : (undefined) 00:01:49.425 Fetching value of define "__AVX512VL__" : (undefined) 00:01:49.425 Fetching value of define "__PCLMUL__" : 1 00:01:49.425 Fetching value of define "__RDRND__" : 1 00:01:49.425 Fetching value of define "__RDSEED__" : (undefined) 00:01:49.425 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:49.425 Fetching value of define "__znver1__" : (undefined) 00:01:49.425 Fetching value of define "__znver2__" : (undefined) 00:01:49.425 Fetching value of define "__znver3__" : (undefined) 00:01:49.425 Fetching value of define "__znver4__" : (undefined) 00:01:49.425 Library asan found: YES 00:01:49.425 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:49.425 Message: lib/log: Defining dependency "log" 00:01:49.425 Message: lib/kvargs: Defining dependency "kvargs" 00:01:49.425 Message: lib/telemetry: Defining dependency "telemetry" 00:01:49.425 Library rt found: YES 00:01:49.425 Checking for function "getentropy" : NO 00:01:49.425 Message: lib/eal: Defining dependency "eal" 00:01:49.425 Message: lib/ring: Defining dependency "ring" 00:01:49.425 Message: lib/rcu: Defining dependency "rcu" 00:01:49.425 Message: lib/mempool: Defining dependency "mempool" 00:01:49.425 Message: lib/mbuf: Defining dependency "mbuf" 00:01:49.425 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:49.425 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:49.425 Compiler for C supports arguments -mpclmul: YES 00:01:49.425 Compiler for C supports arguments -maes: YES 00:01:49.425 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:49.425 Compiler for C supports arguments -mavx512bw: YES 00:01:49.425 Compiler for C supports arguments -mavx512dq: YES 00:01:49.425 Compiler for C supports arguments -mavx512vl: YES 00:01:49.425 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:49.425 Compiler for C supports arguments -mavx2: YES 00:01:49.425 Compiler for C supports arguments -mavx: YES 00:01:49.425 Message: lib/net: Defining dependency "net" 00:01:49.425 Message: lib/meter: Defining dependency "meter" 00:01:49.425 Message: lib/ethdev: Defining dependency "ethdev" 00:01:49.425 Message: lib/pci: Defining dependency "pci" 00:01:49.425 Message: lib/cmdline: Defining dependency "cmdline" 00:01:49.425 Message: lib/hash: Defining dependency "hash" 00:01:49.425 Message: lib/timer: Defining dependency "timer" 00:01:49.425 Message: lib/compressdev: Defining dependency "compressdev" 00:01:49.425 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:49.425 Message: lib/dmadev: Defining dependency "dmadev" 00:01:49.425 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:49.425 Message: lib/power: Defining dependency "power" 00:01:49.425 Message: lib/reorder: Defining dependency "reorder" 00:01:49.425 Message: lib/security: Defining dependency "security" 00:01:49.425 Has header "linux/userfaultfd.h" : YES 00:01:49.425 Has header "linux/vduse.h" : YES 00:01:49.425 Message: lib/vhost: Defining dependency "vhost" 00:01:49.425 Compiler for C supports arguments 
-Wno-format-truncation: YES (cached) 00:01:49.426 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:49.426 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:49.426 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:49.426 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:49.426 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:49.426 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:49.426 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:49.426 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:49.426 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:49.426 Program doxygen found: YES (/usr/bin/doxygen) 00:01:49.426 Configuring doxy-api-html.conf using configuration 00:01:49.426 Configuring doxy-api-man.conf using configuration 00:01:49.426 Program mandb found: YES (/usr/bin/mandb) 00:01:49.426 Program sphinx-build found: NO 00:01:49.426 Configuring rte_build_config.h using configuration 00:01:49.426 Message: 00:01:49.426 ================= 00:01:49.426 Applications Enabled 00:01:49.426 ================= 00:01:49.426 00:01:49.426 apps: 00:01:49.426 00:01:49.426 00:01:49.426 Message: 00:01:49.426 ================= 00:01:49.426 Libraries Enabled 00:01:49.426 ================= 00:01:49.426 00:01:49.426 libs: 00:01:49.426 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:49.426 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:49.426 cryptodev, dmadev, power, reorder, security, vhost, 00:01:49.426 00:01:49.426 Message: 00:01:49.426 =============== 00:01:49.426 Drivers Enabled 00:01:49.426 =============== 00:01:49.426 00:01:49.426 common: 00:01:49.426 00:01:49.426 bus: 00:01:49.426 pci, vdev, 00:01:49.426 mempool: 00:01:49.426 ring, 00:01:49.426 dma: 00:01:49.426 00:01:49.426 net: 00:01:49.426 00:01:49.426 crypto: 00:01:49.426 00:01:49.426 compress: 00:01:49.426 00:01:49.426 vdpa: 00:01:49.426 00:01:49.426 00:01:49.426 Message: 00:01:49.426 ================= 00:01:49.426 Content Skipped 00:01:49.426 ================= 00:01:49.426 00:01:49.426 apps: 00:01:49.426 dumpcap: explicitly disabled via build config 00:01:49.426 graph: explicitly disabled via build config 00:01:49.426 pdump: explicitly disabled via build config 00:01:49.426 proc-info: explicitly disabled via build config 00:01:49.426 test-acl: explicitly disabled via build config 00:01:49.426 test-bbdev: explicitly disabled via build config 00:01:49.426 test-cmdline: explicitly disabled via build config 00:01:49.426 test-compress-perf: explicitly disabled via build config 00:01:49.426 test-crypto-perf: explicitly disabled via build config 00:01:49.426 test-dma-perf: explicitly disabled via build config 00:01:49.426 test-eventdev: explicitly disabled via build config 00:01:49.426 test-fib: explicitly disabled via build config 00:01:49.426 test-flow-perf: explicitly disabled via build config 00:01:49.426 test-gpudev: explicitly disabled via build config 00:01:49.426 test-mldev: explicitly disabled via build config 00:01:49.426 test-pipeline: explicitly disabled via build config 00:01:49.426 test-pmd: explicitly disabled via build config 00:01:49.426 test-regex: explicitly disabled via build config 00:01:49.426 test-sad: explicitly disabled via build config 00:01:49.426 test-security-perf: explicitly disabled via build config 00:01:49.426 00:01:49.426 libs: 00:01:49.426 argparse: explicitly disabled 
via build config 00:01:49.426 metrics: explicitly disabled via build config 00:01:49.426 acl: explicitly disabled via build config 00:01:49.426 bbdev: explicitly disabled via build config 00:01:49.426 bitratestats: explicitly disabled via build config 00:01:49.426 bpf: explicitly disabled via build config 00:01:49.426 cfgfile: explicitly disabled via build config 00:01:49.426 distributor: explicitly disabled via build config 00:01:49.426 efd: explicitly disabled via build config 00:01:49.426 eventdev: explicitly disabled via build config 00:01:49.426 dispatcher: explicitly disabled via build config 00:01:49.426 gpudev: explicitly disabled via build config 00:01:49.426 gro: explicitly disabled via build config 00:01:49.426 gso: explicitly disabled via build config 00:01:49.426 ip_frag: explicitly disabled via build config 00:01:49.426 jobstats: explicitly disabled via build config 00:01:49.426 latencystats: explicitly disabled via build config 00:01:49.426 lpm: explicitly disabled via build config 00:01:49.426 member: explicitly disabled via build config 00:01:49.426 pcapng: explicitly disabled via build config 00:01:49.426 rawdev: explicitly disabled via build config 00:01:49.426 regexdev: explicitly disabled via build config 00:01:49.426 mldev: explicitly disabled via build config 00:01:49.426 rib: explicitly disabled via build config 00:01:49.426 sched: explicitly disabled via build config 00:01:49.426 stack: explicitly disabled via build config 00:01:49.426 ipsec: explicitly disabled via build config 00:01:49.426 pdcp: explicitly disabled via build config 00:01:49.426 fib: explicitly disabled via build config 00:01:49.426 port: explicitly disabled via build config 00:01:49.426 pdump: explicitly disabled via build config 00:01:49.426 table: explicitly disabled via build config 00:01:49.426 pipeline: explicitly disabled via build config 00:01:49.426 graph: explicitly disabled via build config 00:01:49.426 node: explicitly disabled via build config 00:01:49.426 00:01:49.426 drivers: 00:01:49.426 common/cpt: not in enabled drivers build config 00:01:49.426 common/dpaax: not in enabled drivers build config 00:01:49.426 common/iavf: not in enabled drivers build config 00:01:49.426 common/idpf: not in enabled drivers build config 00:01:49.426 common/ionic: not in enabled drivers build config 00:01:49.426 common/mvep: not in enabled drivers build config 00:01:49.426 common/octeontx: not in enabled drivers build config 00:01:49.426 bus/auxiliary: not in enabled drivers build config 00:01:49.426 bus/cdx: not in enabled drivers build config 00:01:49.426 bus/dpaa: not in enabled drivers build config 00:01:49.426 bus/fslmc: not in enabled drivers build config 00:01:49.426 bus/ifpga: not in enabled drivers build config 00:01:49.426 bus/platform: not in enabled drivers build config 00:01:49.426 bus/uacce: not in enabled drivers build config 00:01:49.426 bus/vmbus: not in enabled drivers build config 00:01:49.426 common/cnxk: not in enabled drivers build config 00:01:49.426 common/mlx5: not in enabled drivers build config 00:01:49.426 common/nfp: not in enabled drivers build config 00:01:49.426 common/nitrox: not in enabled drivers build config 00:01:49.426 common/qat: not in enabled drivers build config 00:01:49.426 common/sfc_efx: not in enabled drivers build config 00:01:49.426 mempool/bucket: not in enabled drivers build config 00:01:49.426 mempool/cnxk: not in enabled drivers build config 00:01:49.426 mempool/dpaa: not in enabled drivers build config 00:01:49.426 mempool/dpaa2: not in enabled 
drivers build config 00:01:49.426 mempool/octeontx: not in enabled drivers build config 00:01:49.426 mempool/stack: not in enabled drivers build config 00:01:49.426 dma/cnxk: not in enabled drivers build config 00:01:49.426 dma/dpaa: not in enabled drivers build config 00:01:49.426 dma/dpaa2: not in enabled drivers build config 00:01:49.426 dma/hisilicon: not in enabled drivers build config 00:01:49.426 dma/idxd: not in enabled drivers build config 00:01:49.426 dma/ioat: not in enabled drivers build config 00:01:49.426 dma/skeleton: not in enabled drivers build config 00:01:49.426 net/af_packet: not in enabled drivers build config 00:01:49.426 net/af_xdp: not in enabled drivers build config 00:01:49.426 net/ark: not in enabled drivers build config 00:01:49.426 net/atlantic: not in enabled drivers build config 00:01:49.426 net/avp: not in enabled drivers build config 00:01:49.426 net/axgbe: not in enabled drivers build config 00:01:49.426 net/bnx2x: not in enabled drivers build config 00:01:49.426 net/bnxt: not in enabled drivers build config 00:01:49.426 net/bonding: not in enabled drivers build config 00:01:49.426 net/cnxk: not in enabled drivers build config 00:01:49.426 net/cpfl: not in enabled drivers build config 00:01:49.426 net/cxgbe: not in enabled drivers build config 00:01:49.426 net/dpaa: not in enabled drivers build config 00:01:49.426 net/dpaa2: not in enabled drivers build config 00:01:49.426 net/e1000: not in enabled drivers build config 00:01:49.426 net/ena: not in enabled drivers build config 00:01:49.426 net/enetc: not in enabled drivers build config 00:01:49.426 net/enetfec: not in enabled drivers build config 00:01:49.426 net/enic: not in enabled drivers build config 00:01:49.426 net/failsafe: not in enabled drivers build config 00:01:49.426 net/fm10k: not in enabled drivers build config 00:01:49.426 net/gve: not in enabled drivers build config 00:01:49.426 net/hinic: not in enabled drivers build config 00:01:49.426 net/hns3: not in enabled drivers build config 00:01:49.426 net/i40e: not in enabled drivers build config 00:01:49.426 net/iavf: not in enabled drivers build config 00:01:49.426 net/ice: not in enabled drivers build config 00:01:49.426 net/idpf: not in enabled drivers build config 00:01:49.426 net/igc: not in enabled drivers build config 00:01:49.426 net/ionic: not in enabled drivers build config 00:01:49.426 net/ipn3ke: not in enabled drivers build config 00:01:49.426 net/ixgbe: not in enabled drivers build config 00:01:49.426 net/mana: not in enabled drivers build config 00:01:49.426 net/memif: not in enabled drivers build config 00:01:49.426 net/mlx4: not in enabled drivers build config 00:01:49.426 net/mlx5: not in enabled drivers build config 00:01:49.426 net/mvneta: not in enabled drivers build config 00:01:49.426 net/mvpp2: not in enabled drivers build config 00:01:49.426 net/netvsc: not in enabled drivers build config 00:01:49.426 net/nfb: not in enabled drivers build config 00:01:49.426 net/nfp: not in enabled drivers build config 00:01:49.426 net/ngbe: not in enabled drivers build config 00:01:49.426 net/null: not in enabled drivers build config 00:01:49.426 net/octeontx: not in enabled drivers build config 00:01:49.427 net/octeon_ep: not in enabled drivers build config 00:01:49.427 net/pcap: not in enabled drivers build config 00:01:49.427 net/pfe: not in enabled drivers build config 00:01:49.427 net/qede: not in enabled drivers build config 00:01:49.427 net/ring: not in enabled drivers build config 00:01:49.427 net/sfc: not in enabled drivers 
build config 00:01:49.427 net/softnic: not in enabled drivers build config 00:01:49.427 net/tap: not in enabled drivers build config 00:01:49.427 net/thunderx: not in enabled drivers build config 00:01:49.427 net/txgbe: not in enabled drivers build config 00:01:49.427 net/vdev_netvsc: not in enabled drivers build config 00:01:49.427 net/vhost: not in enabled drivers build config 00:01:49.427 net/virtio: not in enabled drivers build config 00:01:49.427 net/vmxnet3: not in enabled drivers build config 00:01:49.427 raw/*: missing internal dependency, "rawdev" 00:01:49.427 crypto/armv8: not in enabled drivers build config 00:01:49.427 crypto/bcmfs: not in enabled drivers build config 00:01:49.427 crypto/caam_jr: not in enabled drivers build config 00:01:49.427 crypto/ccp: not in enabled drivers build config 00:01:49.427 crypto/cnxk: not in enabled drivers build config 00:01:49.427 crypto/dpaa_sec: not in enabled drivers build config 00:01:49.427 crypto/dpaa2_sec: not in enabled drivers build config 00:01:49.427 crypto/ipsec_mb: not in enabled drivers build config 00:01:49.427 crypto/mlx5: not in enabled drivers build config 00:01:49.427 crypto/mvsam: not in enabled drivers build config 00:01:49.427 crypto/nitrox: not in enabled drivers build config 00:01:49.427 crypto/null: not in enabled drivers build config 00:01:49.427 crypto/octeontx: not in enabled drivers build config 00:01:49.427 crypto/openssl: not in enabled drivers build config 00:01:49.427 crypto/scheduler: not in enabled drivers build config 00:01:49.427 crypto/uadk: not in enabled drivers build config 00:01:49.427 crypto/virtio: not in enabled drivers build config 00:01:49.427 compress/isal: not in enabled drivers build config 00:01:49.427 compress/mlx5: not in enabled drivers build config 00:01:49.427 compress/nitrox: not in enabled drivers build config 00:01:49.427 compress/octeontx: not in enabled drivers build config 00:01:49.427 compress/zlib: not in enabled drivers build config 00:01:49.427 regex/*: missing internal dependency, "regexdev" 00:01:49.427 ml/*: missing internal dependency, "mldev" 00:01:49.427 vdpa/ifc: not in enabled drivers build config 00:01:49.427 vdpa/mlx5: not in enabled drivers build config 00:01:49.427 vdpa/nfp: not in enabled drivers build config 00:01:49.427 vdpa/sfc: not in enabled drivers build config 00:01:49.427 event/*: missing internal dependency, "eventdev" 00:01:49.427 baseband/*: missing internal dependency, "bbdev" 00:01:49.427 gpu/*: missing internal dependency, "gpudev" 00:01:49.427 00:01:49.427 00:01:49.427 Build targets in project: 85 00:01:49.427 00:01:49.427 DPDK 24.03.0 00:01:49.427 00:01:49.427 User defined options 00:01:49.427 buildtype : debug 00:01:49.427 default_library : shared 00:01:49.427 libdir : lib 00:01:49.427 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:49.427 b_sanitize : address 00:01:49.427 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:49.427 c_link_args : 00:01:49.427 cpu_instruction_set: native 00:01:49.427 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:01:49.427 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:01:49.427 enable_docs : false 00:01:49.427 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:49.427 enable_kmods : false 00:01:49.427 max_lcores : 128 00:01:49.427 tests : false 00:01:49.427 00:01:49.427 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:49.999 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:49.999 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:49.999 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:49.999 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:49.999 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:49.999 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:49.999 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:49.999 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:49.999 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:49.999 [9/268] Linking static target lib/librte_kvargs.a 00:01:49.999 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:49.999 [11/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:50.265 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:50.265 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:50.265 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:50.265 [15/268] Linking static target lib/librte_log.a 00:01:50.265 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:50.836 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.836 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:50.836 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:50.836 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:50.836 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:50.836 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:50.836 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:50.836 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:50.836 [25/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:50.836 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:50.836 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:50.836 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:50.836 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:51.098 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:51.098 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:51.098 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:51.098 [33/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:51.098 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:51.098 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:51.098 [36/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:51.098 [37/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:51.098 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:51.098 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:51.098 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:51.098 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:51.098 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:51.098 [43/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:51.098 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:51.098 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:51.098 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:51.098 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:51.098 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:51.098 [49/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:51.098 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:51.098 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:51.098 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:51.098 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:51.098 [54/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:51.098 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:51.098 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:51.098 [57/268] Linking static target lib/librte_telemetry.a 00:01:51.098 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:51.366 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:51.366 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:51.366 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:51.366 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:51.366 [63/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.627 [64/268] Linking target lib/librte_log.so.24.1 00:01:51.627 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:51.627 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:51.898 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:51.898 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:51.898 [69/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:51.898 [70/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:51.898 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:51.898 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:51.898 [73/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:51.898 [74/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:51.898 [75/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:51.898 [76/268] Linking target lib/librte_kvargs.so.24.1 00:01:51.898 [77/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:51.898 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:51.898 [79/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:51.898 [80/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:51.898 [81/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:51.898 [82/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:51.898 [83/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:51.898 [84/268] Linking static target lib/librte_ring.a 00:01:52.161 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:52.161 [86/268] Linking static target lib/librte_pci.a 00:01:52.161 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:52.161 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:52.161 [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:52.161 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:52.161 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:52.161 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:52.161 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:52.161 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:52.161 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:52.161 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:52.161 [97/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:52.161 [98/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:52.161 [99/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:52.161 [100/268] Linking static target lib/librte_meter.a 00:01:52.161 [101/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:52.161 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:52.161 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:52.161 [104/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.161 [105/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:52.161 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:52.161 [107/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:52.161 [108/268] Linking target lib/librte_telemetry.so.24.1 00:01:52.161 [109/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:52.161 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:52.161 [111/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:52.161 [112/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:52.422 [113/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:52.422 [114/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:52.422 [115/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:52.422 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:52.422 [117/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:52.422 [118/268] Linking static target lib/librte_mempool.a 00:01:52.422 [119/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:52.422 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:52.422 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:52.422 [122/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.422 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:52.687 [124/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:52.687 [125/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:52.687 [126/268] Linking static target lib/librte_rcu.a 00:01:52.687 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:52.687 [128/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:52.687 [129/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.687 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:52.687 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:52.687 [132/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.687 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:52.947 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:52.947 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:52.947 [136/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:52.947 [137/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:52.947 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:52.947 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:52.947 [140/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:52.947 [141/268] Linking static target lib/librte_cmdline.a 00:01:52.947 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:52.947 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:52.947 [144/268] Linking static target lib/librte_eal.a 00:01:53.210 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:53.210 [146/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:53.210 [147/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:53.210 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:53.210 [149/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:53.210 [150/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:53.210 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:53.210 [152/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:53.210 [153/268] Linking static target lib/librte_timer.a 00:01:53.210 [154/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:53.210 [155/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.210 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:53.471 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:53.471 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:53.471 [159/268] Linking static target lib/librte_dmadev.a 00:01:53.471 [160/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.729 [161/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:53.729 [162/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.729 [163/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:53.729 [164/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:53.729 [165/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:53.729 [166/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:53.729 [167/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:53.987 [168/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:53.987 [169/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:53.987 [170/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:53.987 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:53.987 [172/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.987 [173/268] Linking static target lib/librte_net.a 00:01:53.987 [174/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.987 [175/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:53.987 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:53.987 [177/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:53.987 [178/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:53.987 [179/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:53.987 [180/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:53.987 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:53.987 [182/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:53.987 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:53.987 [184/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:53.987 [185/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:53.987 [186/268] Linking static target lib/librte_power.a 00:01:54.246 [187/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:54.246 [188/268] Linking static target lib/librte_compressdev.a 00:01:54.246 [189/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.246 [190/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:54.246 [191/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:54.246 [192/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:54.246 [193/268] Linking 
static target drivers/librte_bus_vdev.a 00:01:54.246 [194/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:54.246 [195/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:54.246 [196/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:54.246 [197/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:54.247 [198/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:54.247 [199/268] Linking static target drivers/librte_bus_pci.a 00:01:54.247 [200/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:54.506 [201/268] Linking static target lib/librte_hash.a 00:01:54.506 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:54.506 [203/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:54.506 [204/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.506 [205/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:54.506 [206/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:54.506 [207/268] Linking static target drivers/librte_mempool_ring.a 00:01:54.506 [208/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.506 [209/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:54.506 [210/268] Linking static target lib/librte_reorder.a 00:01:54.506 [211/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.764 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:54.764 [213/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.764 [214/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.022 [215/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.280 [216/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:55.280 [217/268] Linking static target lib/librte_security.a 00:01:55.848 [218/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:55.848 [219/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.414 [220/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:56.414 [221/268] Linking static target lib/librte_mbuf.a 00:01:56.673 [222/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:56.673 [223/268] Linking static target lib/librte_cryptodev.a 00:01:56.673 [224/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.608 [225/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:57.608 [226/268] Linking static target lib/librte_ethdev.a 00:01:57.608 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.983 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.983 [229/268] Linking target lib/librte_eal.so.24.1 00:01:59.242 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:59.242 [231/268] Linking target lib/librte_meter.so.24.1 00:01:59.242 
[232/268] Linking target lib/librte_timer.so.24.1 00:01:59.242 [233/268] Linking target lib/librte_ring.so.24.1 00:01:59.242 [234/268] Linking target lib/librte_pci.so.24.1 00:01:59.242 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:59.242 [236/268] Linking target lib/librte_dmadev.so.24.1 00:01:59.242 [237/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:59.242 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:59.242 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:59.242 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:59.242 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:59.501 [242/268] Linking target lib/librte_rcu.so.24.1 00:01:59.501 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:59.501 [244/268] Linking target lib/librte_mempool.so.24.1 00:01:59.501 [245/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:59.501 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:59.501 [247/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:59.501 [248/268] Linking target lib/librte_mbuf.so.24.1 00:01:59.760 [249/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:59.760 [250/268] Linking target lib/librte_net.so.24.1 00:01:59.760 [251/268] Linking target lib/librte_reorder.so.24.1 00:01:59.760 [252/268] Linking target lib/librte_compressdev.so.24.1 00:01:59.760 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:01:59.760 [254/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:59.760 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:00.018 [256/268] Linking target lib/librte_security.so.24.1 00:02:00.018 [257/268] Linking target lib/librte_cmdline.so.24.1 00:02:00.018 [258/268] Linking target lib/librte_hash.so.24.1 00:02:00.018 [259/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:00.585 [260/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:01.993 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.993 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:01.993 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:01.993 [264/268] Linking target lib/librte_power.so.24.1 00:02:23.934 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:23.934 [266/268] Linking static target lib/librte_vhost.a 00:02:24.192 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.192 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:24.192 INFO: autodetecting backend as ninja 00:02:24.192 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:02:25.124 CC lib/log/log.o 00:02:25.124 CC lib/log/log_flags.o 00:02:25.124 CC lib/ut/ut.o 00:02:25.124 CC lib/log/log_deprecated.o 00:02:25.124 CC lib/ut_mock/mock.o 00:02:25.382 LIB libspdk_log.a 00:02:25.382 LIB libspdk_ut.a 00:02:25.382 LIB libspdk_ut_mock.a 00:02:25.382 SO libspdk_ut.so.2.0 00:02:25.382 SO libspdk_ut_mock.so.6.0 00:02:25.382 SO libspdk_log.so.7.0 00:02:25.382 
SYMLINK libspdk_ut_mock.so 00:02:25.382 SYMLINK libspdk_ut.so 00:02:25.382 SYMLINK libspdk_log.so 00:02:25.641 CXX lib/trace_parser/trace.o 00:02:25.641 CC lib/ioat/ioat.o 00:02:25.641 CC lib/dma/dma.o 00:02:25.641 CC lib/util/base64.o 00:02:25.641 CC lib/util/bit_array.o 00:02:25.641 CC lib/util/cpuset.o 00:02:25.641 CC lib/util/crc16.o 00:02:25.641 CC lib/util/crc32.o 00:02:25.641 CC lib/util/crc32c.o 00:02:25.641 CC lib/util/crc32_ieee.o 00:02:25.641 CC lib/util/crc64.o 00:02:25.641 CC lib/util/dif.o 00:02:25.641 CC lib/util/fd.o 00:02:25.641 CC lib/util/fd_group.o 00:02:25.641 CC lib/util/file.o 00:02:25.641 CC lib/util/hexlify.o 00:02:25.641 CC lib/util/iov.o 00:02:25.641 CC lib/util/math.o 00:02:25.641 CC lib/util/net.o 00:02:25.641 CC lib/util/pipe.o 00:02:25.641 CC lib/util/strerror_tls.o 00:02:25.641 CC lib/util/string.o 00:02:25.641 CC lib/util/uuid.o 00:02:25.641 CC lib/util/xor.o 00:02:25.641 CC lib/util/zipf.o 00:02:25.899 CC lib/vfio_user/host/vfio_user_pci.o 00:02:25.899 CC lib/vfio_user/host/vfio_user.o 00:02:25.899 LIB libspdk_dma.a 00:02:25.899 SO libspdk_dma.so.4.0 00:02:25.899 LIB libspdk_ioat.a 00:02:25.899 SYMLINK libspdk_dma.so 00:02:26.157 SO libspdk_ioat.so.7.0 00:02:26.157 SYMLINK libspdk_ioat.so 00:02:26.157 LIB libspdk_vfio_user.a 00:02:26.157 SO libspdk_vfio_user.so.5.0 00:02:26.157 SYMLINK libspdk_vfio_user.so 00:02:26.415 LIB libspdk_util.a 00:02:26.415 SO libspdk_util.so.10.0 00:02:26.673 SYMLINK libspdk_util.so 00:02:26.673 LIB libspdk_trace_parser.a 00:02:26.673 CC lib/rdma_provider/common.o 00:02:26.673 CC lib/json/json_parse.o 00:02:26.673 CC lib/conf/conf.o 00:02:26.673 CC lib/idxd/idxd.o 00:02:26.673 CC lib/env_dpdk/env.o 00:02:26.673 CC lib/vmd/vmd.o 00:02:26.673 CC lib/rdma_utils/rdma_utils.o 00:02:26.673 CC lib/json/json_util.o 00:02:26.673 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:26.673 CC lib/vmd/led.o 00:02:26.673 CC lib/idxd/idxd_user.o 00:02:26.673 CC lib/json/json_write.o 00:02:26.673 CC lib/env_dpdk/memory.o 00:02:26.673 CC lib/env_dpdk/pci.o 00:02:26.673 CC lib/idxd/idxd_kernel.o 00:02:26.673 CC lib/env_dpdk/init.o 00:02:26.673 CC lib/env_dpdk/threads.o 00:02:26.673 CC lib/env_dpdk/pci_ioat.o 00:02:26.673 CC lib/env_dpdk/pci_virtio.o 00:02:26.673 CC lib/env_dpdk/pci_vmd.o 00:02:26.673 CC lib/env_dpdk/pci_event.o 00:02:26.673 CC lib/env_dpdk/pci_idxd.o 00:02:26.673 CC lib/env_dpdk/sigbus_handler.o 00:02:26.673 CC lib/env_dpdk/pci_dpdk.o 00:02:26.673 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:26.673 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:26.673 SO libspdk_trace_parser.so.5.0 00:02:26.931 SYMLINK libspdk_trace_parser.so 00:02:26.931 LIB libspdk_rdma_provider.a 00:02:26.931 SO libspdk_rdma_provider.so.6.0 00:02:26.931 LIB libspdk_conf.a 00:02:27.189 SO libspdk_conf.so.6.0 00:02:27.189 SYMLINK libspdk_rdma_provider.so 00:02:27.189 LIB libspdk_rdma_utils.a 00:02:27.189 SYMLINK libspdk_conf.so 00:02:27.189 SO libspdk_rdma_utils.so.1.0 00:02:27.189 LIB libspdk_json.a 00:02:27.189 SO libspdk_json.so.6.0 00:02:27.189 SYMLINK libspdk_rdma_utils.so 00:02:27.189 SYMLINK libspdk_json.so 00:02:27.447 CC lib/jsonrpc/jsonrpc_server.o 00:02:27.447 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:27.447 CC lib/jsonrpc/jsonrpc_client.o 00:02:27.447 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:27.447 LIB libspdk_idxd.a 00:02:27.705 SO libspdk_idxd.so.12.0 00:02:27.705 SYMLINK libspdk_idxd.so 00:02:27.705 LIB libspdk_vmd.a 00:02:27.705 SO libspdk_vmd.so.6.0 00:02:27.705 LIB libspdk_jsonrpc.a 00:02:27.705 SO libspdk_jsonrpc.so.6.0 00:02:27.705 SYMLINK libspdk_vmd.so 
00:02:27.705 SYMLINK libspdk_jsonrpc.so 00:02:27.963 CC lib/rpc/rpc.o 00:02:28.221 LIB libspdk_rpc.a 00:02:28.221 SO libspdk_rpc.so.6.0 00:02:28.221 SYMLINK libspdk_rpc.so 00:02:28.478 CC lib/trace/trace.o 00:02:28.478 CC lib/notify/notify.o 00:02:28.478 CC lib/trace/trace_flags.o 00:02:28.478 CC lib/keyring/keyring.o 00:02:28.478 CC lib/notify/notify_rpc.o 00:02:28.478 CC lib/trace/trace_rpc.o 00:02:28.478 CC lib/keyring/keyring_rpc.o 00:02:28.478 LIB libspdk_notify.a 00:02:28.736 SO libspdk_notify.so.6.0 00:02:28.736 SYMLINK libspdk_notify.so 00:02:28.736 LIB libspdk_keyring.a 00:02:28.736 SO libspdk_keyring.so.1.0 00:02:28.736 LIB libspdk_trace.a 00:02:28.736 SO libspdk_trace.so.10.0 00:02:28.736 SYMLINK libspdk_keyring.so 00:02:28.736 SYMLINK libspdk_trace.so 00:02:28.993 CC lib/thread/thread.o 00:02:28.993 CC lib/sock/sock.o 00:02:28.993 CC lib/sock/sock_rpc.o 00:02:28.993 CC lib/thread/iobuf.o 00:02:29.559 LIB libspdk_sock.a 00:02:29.559 SO libspdk_sock.so.10.0 00:02:29.559 SYMLINK libspdk_sock.so 00:02:29.559 LIB libspdk_env_dpdk.a 00:02:29.817 SO libspdk_env_dpdk.so.15.0 00:02:29.817 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:29.817 CC lib/nvme/nvme_ctrlr.o 00:02:29.817 CC lib/nvme/nvme_fabric.o 00:02:29.817 CC lib/nvme/nvme_ns_cmd.o 00:02:29.817 CC lib/nvme/nvme_ns.o 00:02:29.817 CC lib/nvme/nvme_pcie_common.o 00:02:29.817 CC lib/nvme/nvme_pcie.o 00:02:29.817 CC lib/nvme/nvme_qpair.o 00:02:29.817 CC lib/nvme/nvme.o 00:02:29.817 CC lib/nvme/nvme_quirks.o 00:02:29.817 CC lib/nvme/nvme_transport.o 00:02:29.817 CC lib/nvme/nvme_discovery.o 00:02:29.817 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:29.817 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:29.817 CC lib/nvme/nvme_tcp.o 00:02:29.817 CC lib/nvme/nvme_opal.o 00:02:29.817 CC lib/nvme/nvme_io_msg.o 00:02:29.817 CC lib/nvme/nvme_poll_group.o 00:02:29.817 CC lib/nvme/nvme_zns.o 00:02:29.817 CC lib/nvme/nvme_stubs.o 00:02:29.817 CC lib/nvme/nvme_auth.o 00:02:29.817 CC lib/nvme/nvme_cuse.o 00:02:29.817 CC lib/nvme/nvme_rdma.o 00:02:29.817 SYMLINK libspdk_env_dpdk.so 00:02:31.192 LIB libspdk_thread.a 00:02:31.192 SO libspdk_thread.so.10.1 00:02:31.192 SYMLINK libspdk_thread.so 00:02:31.450 CC lib/blob/blobstore.o 00:02:31.450 CC lib/blob/request.o 00:02:31.450 CC lib/accel/accel.o 00:02:31.450 CC lib/blob/zeroes.o 00:02:31.450 CC lib/virtio/virtio.o 00:02:31.450 CC lib/init/json_config.o 00:02:31.450 CC lib/accel/accel_rpc.o 00:02:31.450 CC lib/blob/blob_bs_dev.o 00:02:31.450 CC lib/init/subsystem.o 00:02:31.450 CC lib/virtio/virtio_vhost_user.o 00:02:31.450 CC lib/virtio/virtio_vfio_user.o 00:02:31.450 CC lib/accel/accel_sw.o 00:02:31.450 CC lib/init/subsystem_rpc.o 00:02:31.450 CC lib/virtio/virtio_pci.o 00:02:31.450 CC lib/init/rpc.o 00:02:31.708 LIB libspdk_init.a 00:02:31.708 SO libspdk_init.so.5.0 00:02:31.708 SYMLINK libspdk_init.so 00:02:31.708 LIB libspdk_virtio.a 00:02:31.708 SO libspdk_virtio.so.7.0 00:02:31.966 SYMLINK libspdk_virtio.so 00:02:31.966 CC lib/event/app.o 00:02:31.966 CC lib/event/reactor.o 00:02:31.966 CC lib/event/app_rpc.o 00:02:31.966 CC lib/event/log_rpc.o 00:02:31.966 CC lib/event/scheduler_static.o 00:02:32.533 LIB libspdk_event.a 00:02:32.533 SO libspdk_event.so.14.0 00:02:32.533 SYMLINK libspdk_event.so 00:02:32.533 LIB libspdk_accel.a 00:02:32.533 SO libspdk_accel.so.16.0 00:02:32.792 LIB libspdk_nvme.a 00:02:32.792 SYMLINK libspdk_accel.so 00:02:32.792 SO libspdk_nvme.so.13.1 00:02:32.792 CC lib/bdev/bdev.o 00:02:32.792 CC lib/bdev/bdev_rpc.o 00:02:32.792 CC lib/bdev/bdev_zone.o 00:02:32.792 CC lib/bdev/part.o 
00:02:32.792 CC lib/bdev/scsi_nvme.o 00:02:33.051 SYMLINK libspdk_nvme.so 00:02:35.585 LIB libspdk_blob.a 00:02:35.585 SO libspdk_blob.so.11.0 00:02:35.585 SYMLINK libspdk_blob.so 00:02:35.844 CC lib/blobfs/blobfs.o 00:02:35.844 CC lib/blobfs/tree.o 00:02:35.844 CC lib/lvol/lvol.o 00:02:36.103 LIB libspdk_bdev.a 00:02:36.103 SO libspdk_bdev.so.16.0 00:02:36.367 SYMLINK libspdk_bdev.so 00:02:36.367 CC lib/nbd/nbd.o 00:02:36.367 CC lib/nvmf/ctrlr.o 00:02:36.367 CC lib/scsi/dev.o 00:02:36.367 CC lib/ublk/ublk.o 00:02:36.367 CC lib/nbd/nbd_rpc.o 00:02:36.367 CC lib/scsi/lun.o 00:02:36.367 CC lib/ublk/ublk_rpc.o 00:02:36.367 CC lib/nvmf/ctrlr_discovery.o 00:02:36.367 CC lib/scsi/port.o 00:02:36.367 CC lib/ftl/ftl_core.o 00:02:36.367 CC lib/nvmf/ctrlr_bdev.o 00:02:36.367 CC lib/scsi/scsi.o 00:02:36.367 CC lib/nvmf/subsystem.o 00:02:36.367 CC lib/ftl/ftl_init.o 00:02:36.367 CC lib/scsi/scsi_bdev.o 00:02:36.367 CC lib/nvmf/nvmf.o 00:02:36.367 CC lib/ftl/ftl_layout.o 00:02:36.367 CC lib/scsi/scsi_pr.o 00:02:36.367 CC lib/nvmf/nvmf_rpc.o 00:02:36.367 CC lib/ftl/ftl_debug.o 00:02:36.367 CC lib/nvmf/transport.o 00:02:36.367 CC lib/scsi/scsi_rpc.o 00:02:36.367 CC lib/ftl/ftl_io.o 00:02:36.367 CC lib/scsi/task.o 00:02:36.367 CC lib/nvmf/tcp.o 00:02:36.367 CC lib/ftl/ftl_sb.o 00:02:36.367 CC lib/ftl/ftl_l2p.o 00:02:36.367 CC lib/nvmf/stubs.o 00:02:36.367 CC lib/nvmf/mdns_server.o 00:02:36.367 CC lib/ftl/ftl_l2p_flat.o 00:02:36.367 CC lib/nvmf/rdma.o 00:02:36.367 CC lib/ftl/ftl_nv_cache.o 00:02:36.367 CC lib/nvmf/auth.o 00:02:36.367 CC lib/ftl/ftl_band.o 00:02:36.367 CC lib/ftl/ftl_band_ops.o 00:02:36.367 CC lib/ftl/ftl_writer.o 00:02:36.367 CC lib/ftl/ftl_rq.o 00:02:36.367 CC lib/ftl/ftl_reloc.o 00:02:36.367 CC lib/ftl/ftl_l2p_cache.o 00:02:36.367 CC lib/ftl/ftl_p2l.o 00:02:36.367 CC lib/ftl/mngt/ftl_mngt.o 00:02:36.367 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:36.367 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:36.367 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:36.367 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:36.367 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:36.943 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:36.943 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:36.943 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:36.943 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:36.943 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:36.943 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:36.943 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:36.943 CC lib/ftl/utils/ftl_conf.o 00:02:36.943 CC lib/ftl/utils/ftl_md.o 00:02:36.943 CC lib/ftl/utils/ftl_mempool.o 00:02:36.943 CC lib/ftl/utils/ftl_bitmap.o 00:02:36.943 CC lib/ftl/utils/ftl_property.o 00:02:36.943 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:36.943 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:36.943 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:36.943 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:37.201 LIB libspdk_blobfs.a 00:02:37.201 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:37.201 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:37.201 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:37.201 SO libspdk_blobfs.so.10.0 00:02:37.201 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:37.201 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:37.201 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:37.201 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:37.201 CC lib/ftl/base/ftl_base_dev.o 00:02:37.201 CC lib/ftl/base/ftl_base_bdev.o 00:02:37.201 SYMLINK libspdk_blobfs.so 00:02:37.201 CC lib/ftl/ftl_trace.o 00:02:37.460 LIB libspdk_lvol.a 00:02:37.460 SO libspdk_lvol.so.10.0 00:02:37.460 LIB libspdk_nbd.a 00:02:37.460 SO libspdk_nbd.so.7.0 00:02:37.460 SYMLINK libspdk_lvol.so 00:02:37.460 
SYMLINK libspdk_nbd.so 00:02:37.718 LIB libspdk_scsi.a 00:02:37.718 SO libspdk_scsi.so.9.0 00:02:37.718 LIB libspdk_ublk.a 00:02:37.718 SO libspdk_ublk.so.3.0 00:02:37.718 SYMLINK libspdk_ublk.so 00:02:37.718 SYMLINK libspdk_scsi.so 00:02:37.976 CC lib/iscsi/conn.o 00:02:37.976 CC lib/iscsi/init_grp.o 00:02:37.976 CC lib/vhost/vhost_rpc.o 00:02:37.976 CC lib/vhost/vhost.o 00:02:37.976 CC lib/iscsi/iscsi.o 00:02:37.976 CC lib/vhost/vhost_scsi.o 00:02:37.976 CC lib/iscsi/md5.o 00:02:37.976 CC lib/vhost/vhost_blk.o 00:02:37.976 CC lib/iscsi/param.o 00:02:37.976 CC lib/vhost/rte_vhost_user.o 00:02:37.976 CC lib/iscsi/portal_grp.o 00:02:37.976 CC lib/iscsi/tgt_node.o 00:02:37.976 CC lib/iscsi/iscsi_subsystem.o 00:02:37.976 CC lib/iscsi/iscsi_rpc.o 00:02:37.976 CC lib/iscsi/task.o 00:02:38.234 LIB libspdk_ftl.a 00:02:38.491 SO libspdk_ftl.so.9.0 00:02:38.749 SYMLINK libspdk_ftl.so 00:02:39.317 LIB libspdk_vhost.a 00:02:39.317 SO libspdk_vhost.so.8.0 00:02:39.577 SYMLINK libspdk_vhost.so 00:02:39.834 LIB libspdk_iscsi.a 00:02:39.834 SO libspdk_iscsi.so.8.0 00:02:39.834 LIB libspdk_nvmf.a 00:02:39.834 SO libspdk_nvmf.so.19.0 00:02:40.092 SYMLINK libspdk_iscsi.so 00:02:40.092 SYMLINK libspdk_nvmf.so 00:02:40.350 CC module/env_dpdk/env_dpdk_rpc.o 00:02:40.608 CC module/accel/dsa/accel_dsa.o 00:02:40.608 CC module/scheduler/gscheduler/gscheduler.o 00:02:40.608 CC module/sock/posix/posix.o 00:02:40.608 CC module/accel/dsa/accel_dsa_rpc.o 00:02:40.608 CC module/keyring/file/keyring.o 00:02:40.608 CC module/blob/bdev/blob_bdev.o 00:02:40.608 CC module/keyring/file/keyring_rpc.o 00:02:40.608 CC module/accel/ioat/accel_ioat.o 00:02:40.608 CC module/accel/ioat/accel_ioat_rpc.o 00:02:40.608 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:40.608 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:40.608 CC module/accel/error/accel_error.o 00:02:40.608 CC module/accel/iaa/accel_iaa.o 00:02:40.608 CC module/keyring/linux/keyring.o 00:02:40.608 CC module/accel/error/accel_error_rpc.o 00:02:40.608 CC module/accel/iaa/accel_iaa_rpc.o 00:02:40.608 CC module/keyring/linux/keyring_rpc.o 00:02:40.608 LIB libspdk_env_dpdk_rpc.a 00:02:40.608 SO libspdk_env_dpdk_rpc.so.6.0 00:02:40.608 SYMLINK libspdk_env_dpdk_rpc.so 00:02:40.608 LIB libspdk_keyring_linux.a 00:02:40.608 LIB libspdk_keyring_file.a 00:02:40.608 LIB libspdk_scheduler_gscheduler.a 00:02:40.608 LIB libspdk_scheduler_dpdk_governor.a 00:02:40.866 SO libspdk_keyring_linux.so.1.0 00:02:40.866 SO libspdk_keyring_file.so.1.0 00:02:40.866 SO libspdk_scheduler_gscheduler.so.4.0 00:02:40.866 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:40.866 LIB libspdk_accel_error.a 00:02:40.866 LIB libspdk_accel_ioat.a 00:02:40.866 LIB libspdk_scheduler_dynamic.a 00:02:40.866 LIB libspdk_accel_iaa.a 00:02:40.866 SO libspdk_accel_error.so.2.0 00:02:40.866 SO libspdk_accel_ioat.so.6.0 00:02:40.866 SYMLINK libspdk_keyring_linux.so 00:02:40.866 SYMLINK libspdk_keyring_file.so 00:02:40.866 SYMLINK libspdk_scheduler_gscheduler.so 00:02:40.866 SO libspdk_scheduler_dynamic.so.4.0 00:02:40.866 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:40.867 SO libspdk_accel_iaa.so.3.0 00:02:40.867 SYMLINK libspdk_accel_ioat.so 00:02:40.867 SYMLINK libspdk_accel_error.so 00:02:40.867 LIB libspdk_accel_dsa.a 00:02:40.867 SYMLINK libspdk_scheduler_dynamic.so 00:02:40.867 SYMLINK libspdk_accel_iaa.so 00:02:40.867 LIB libspdk_blob_bdev.a 00:02:40.867 SO libspdk_accel_dsa.so.5.0 00:02:40.867 SO libspdk_blob_bdev.so.11.0 00:02:40.867 SYMLINK libspdk_accel_dsa.so 00:02:40.867 SYMLINK 
libspdk_blob_bdev.so 00:02:41.125 CC module/blobfs/bdev/blobfs_bdev.o 00:02:41.125 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:41.125 CC module/bdev/delay/vbdev_delay.o 00:02:41.125 CC module/bdev/malloc/bdev_malloc.o 00:02:41.125 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:41.125 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:41.125 CC module/bdev/lvol/vbdev_lvol.o 00:02:41.125 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:41.125 CC module/bdev/gpt/gpt.o 00:02:41.125 CC module/bdev/gpt/vbdev_gpt.o 00:02:41.125 CC module/bdev/error/vbdev_error.o 00:02:41.125 CC module/bdev/null/bdev_null.o 00:02:41.125 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:41.125 CC module/bdev/error/vbdev_error_rpc.o 00:02:41.125 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:41.125 CC module/bdev/passthru/vbdev_passthru.o 00:02:41.125 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:41.125 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:41.125 CC module/bdev/iscsi/bdev_iscsi.o 00:02:41.125 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:41.125 CC module/bdev/null/bdev_null_rpc.o 00:02:41.125 CC module/bdev/raid/bdev_raid.o 00:02:41.125 CC module/bdev/nvme/bdev_nvme.o 00:02:41.125 CC module/bdev/ftl/bdev_ftl.o 00:02:41.125 CC module/bdev/aio/bdev_aio.o 00:02:41.125 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:41.125 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:41.125 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:41.125 CC module/bdev/raid/bdev_raid_rpc.o 00:02:41.125 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:41.125 CC module/bdev/aio/bdev_aio_rpc.o 00:02:41.125 CC module/bdev/nvme/nvme_rpc.o 00:02:41.125 CC module/bdev/split/vbdev_split.o 00:02:41.125 CC module/bdev/raid/bdev_raid_sb.o 00:02:41.125 CC module/bdev/nvme/bdev_mdns_client.o 00:02:41.125 CC module/bdev/split/vbdev_split_rpc.o 00:02:41.125 CC module/bdev/raid/raid0.o 00:02:41.125 CC module/bdev/nvme/vbdev_opal.o 00:02:41.125 CC module/bdev/raid/raid1.o 00:02:41.125 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:41.125 CC module/bdev/raid/concat.o 00:02:41.125 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:41.692 LIB libspdk_sock_posix.a 00:02:41.692 LIB libspdk_blobfs_bdev.a 00:02:41.692 SO libspdk_blobfs_bdev.so.6.0 00:02:41.692 SO libspdk_sock_posix.so.6.0 00:02:41.692 LIB libspdk_bdev_split.a 00:02:41.692 LIB libspdk_bdev_gpt.a 00:02:41.692 SO libspdk_bdev_split.so.6.0 00:02:41.692 SO libspdk_bdev_gpt.so.6.0 00:02:41.692 SYMLINK libspdk_blobfs_bdev.so 00:02:41.692 SYMLINK libspdk_sock_posix.so 00:02:41.692 LIB libspdk_bdev_null.a 00:02:41.692 SYMLINK libspdk_bdev_split.so 00:02:41.692 SO libspdk_bdev_null.so.6.0 00:02:41.692 SYMLINK libspdk_bdev_gpt.so 00:02:41.692 LIB libspdk_bdev_error.a 00:02:41.692 LIB libspdk_bdev_aio.a 00:02:41.692 LIB libspdk_bdev_ftl.a 00:02:41.692 LIB libspdk_bdev_passthru.a 00:02:41.692 SYMLINK libspdk_bdev_null.so 00:02:41.692 SO libspdk_bdev_error.so.6.0 00:02:41.692 SO libspdk_bdev_aio.so.6.0 00:02:41.692 SO libspdk_bdev_ftl.so.6.0 00:02:41.692 SO libspdk_bdev_passthru.so.6.0 00:02:41.951 LIB libspdk_bdev_zone_block.a 00:02:41.951 LIB libspdk_bdev_malloc.a 00:02:41.951 LIB libspdk_bdev_iscsi.a 00:02:41.951 SO libspdk_bdev_zone_block.so.6.0 00:02:41.951 SYMLINK libspdk_bdev_error.so 00:02:41.951 SO libspdk_bdev_malloc.so.6.0 00:02:41.951 SO libspdk_bdev_iscsi.so.6.0 00:02:41.951 SYMLINK libspdk_bdev_aio.so 00:02:41.951 SYMLINK libspdk_bdev_ftl.so 00:02:41.951 SYMLINK libspdk_bdev_passthru.so 00:02:41.951 LIB libspdk_bdev_delay.a 00:02:41.951 SYMLINK libspdk_bdev_zone_block.so 00:02:41.951 SYMLINK libspdk_bdev_malloc.so 
00:02:41.951 SYMLINK libspdk_bdev_iscsi.so 00:02:41.951 SO libspdk_bdev_delay.so.6.0 00:02:41.951 SYMLINK libspdk_bdev_delay.so 00:02:42.209 LIB libspdk_bdev_lvol.a 00:02:42.209 LIB libspdk_bdev_virtio.a 00:02:42.209 SO libspdk_bdev_lvol.so.6.0 00:02:42.209 SO libspdk_bdev_virtio.so.6.0 00:02:42.209 SYMLINK libspdk_bdev_lvol.so 00:02:42.209 SYMLINK libspdk_bdev_virtio.so 00:02:42.774 LIB libspdk_bdev_raid.a 00:02:42.774 SO libspdk_bdev_raid.so.6.0 00:02:42.774 SYMLINK libspdk_bdev_raid.so 00:02:44.158 LIB libspdk_bdev_nvme.a 00:02:44.158 SO libspdk_bdev_nvme.so.7.0 00:02:44.416 SYMLINK libspdk_bdev_nvme.so 00:02:44.674 CC module/event/subsystems/keyring/keyring.o 00:02:44.674 CC module/event/subsystems/scheduler/scheduler.o 00:02:44.674 CC module/event/subsystems/iobuf/iobuf.o 00:02:44.674 CC module/event/subsystems/vmd/vmd.o 00:02:44.674 CC module/event/subsystems/sock/sock.o 00:02:44.675 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:44.675 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:44.675 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:44.675 LIB libspdk_event_keyring.a 00:02:44.675 LIB libspdk_event_vhost_blk.a 00:02:44.675 LIB libspdk_event_scheduler.a 00:02:44.675 LIB libspdk_event_vmd.a 00:02:44.675 LIB libspdk_event_sock.a 00:02:44.933 SO libspdk_event_keyring.so.1.0 00:02:44.933 LIB libspdk_event_iobuf.a 00:02:44.933 SO libspdk_event_vhost_blk.so.3.0 00:02:44.933 SO libspdk_event_scheduler.so.4.0 00:02:44.933 SO libspdk_event_sock.so.5.0 00:02:44.933 SO libspdk_event_vmd.so.6.0 00:02:44.933 SO libspdk_event_iobuf.so.3.0 00:02:44.933 SYMLINK libspdk_event_keyring.so 00:02:44.933 SYMLINK libspdk_event_vhost_blk.so 00:02:44.933 SYMLINK libspdk_event_sock.so 00:02:44.933 SYMLINK libspdk_event_scheduler.so 00:02:44.933 SYMLINK libspdk_event_vmd.so 00:02:44.933 SYMLINK libspdk_event_iobuf.so 00:02:45.191 CC module/event/subsystems/accel/accel.o 00:02:45.191 LIB libspdk_event_accel.a 00:02:45.191 SO libspdk_event_accel.so.6.0 00:02:45.191 SYMLINK libspdk_event_accel.so 00:02:45.449 CC module/event/subsystems/bdev/bdev.o 00:02:45.707 LIB libspdk_event_bdev.a 00:02:45.707 SO libspdk_event_bdev.so.6.0 00:02:45.707 SYMLINK libspdk_event_bdev.so 00:02:45.965 CC module/event/subsystems/ublk/ublk.o 00:02:45.965 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:45.965 CC module/event/subsystems/nbd/nbd.o 00:02:45.965 CC module/event/subsystems/scsi/scsi.o 00:02:45.965 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:45.965 LIB libspdk_event_nbd.a 00:02:45.965 LIB libspdk_event_ublk.a 00:02:45.965 LIB libspdk_event_scsi.a 00:02:45.965 SO libspdk_event_nbd.so.6.0 00:02:45.965 SO libspdk_event_ublk.so.3.0 00:02:46.224 SO libspdk_event_scsi.so.6.0 00:02:46.224 SYMLINK libspdk_event_nbd.so 00:02:46.224 SYMLINK libspdk_event_ublk.so 00:02:46.224 SYMLINK libspdk_event_scsi.so 00:02:46.224 LIB libspdk_event_nvmf.a 00:02:46.224 SO libspdk_event_nvmf.so.6.0 00:02:46.224 SYMLINK libspdk_event_nvmf.so 00:02:46.224 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:46.224 CC module/event/subsystems/iscsi/iscsi.o 00:02:46.482 LIB libspdk_event_vhost_scsi.a 00:02:46.482 LIB libspdk_event_iscsi.a 00:02:46.482 SO libspdk_event_vhost_scsi.so.3.0 00:02:46.482 SO libspdk_event_iscsi.so.6.0 00:02:46.482 SYMLINK libspdk_event_vhost_scsi.so 00:02:46.482 SYMLINK libspdk_event_iscsi.so 00:02:46.740 SO libspdk.so.6.0 00:02:46.740 SYMLINK libspdk.so 00:02:46.740 CXX app/trace/trace.o 00:02:46.740 CC app/spdk_top/spdk_top.o 00:02:46.740 CC app/spdk_nvme_perf/perf.o 00:02:46.740 CC 
app/spdk_nvme_identify/identify.o 00:02:46.740 CC app/spdk_lspci/spdk_lspci.o 00:02:46.740 TEST_HEADER include/spdk/accel.h 00:02:46.740 CC test/rpc_client/rpc_client_test.o 00:02:46.740 CC app/trace_record/trace_record.o 00:02:46.740 TEST_HEADER include/spdk/accel_module.h 00:02:46.740 TEST_HEADER include/spdk/assert.h 00:02:46.740 CC app/spdk_nvme_discover/discovery_aer.o 00:02:46.740 TEST_HEADER include/spdk/barrier.h 00:02:46.740 TEST_HEADER include/spdk/base64.h 00:02:46.740 TEST_HEADER include/spdk/bdev.h 00:02:46.740 TEST_HEADER include/spdk/bdev_module.h 00:02:46.741 TEST_HEADER include/spdk/bdev_zone.h 00:02:46.741 TEST_HEADER include/spdk/bit_array.h 00:02:46.741 TEST_HEADER include/spdk/bit_pool.h 00:02:46.741 TEST_HEADER include/spdk/blob_bdev.h 00:02:46.741 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:46.741 TEST_HEADER include/spdk/blobfs.h 00:02:47.006 TEST_HEADER include/spdk/blob.h 00:02:47.006 TEST_HEADER include/spdk/conf.h 00:02:47.006 TEST_HEADER include/spdk/config.h 00:02:47.006 TEST_HEADER include/spdk/cpuset.h 00:02:47.006 TEST_HEADER include/spdk/crc16.h 00:02:47.006 TEST_HEADER include/spdk/crc32.h 00:02:47.006 TEST_HEADER include/spdk/crc64.h 00:02:47.006 TEST_HEADER include/spdk/dif.h 00:02:47.006 TEST_HEADER include/spdk/dma.h 00:02:47.006 TEST_HEADER include/spdk/endian.h 00:02:47.006 TEST_HEADER include/spdk/env_dpdk.h 00:02:47.006 TEST_HEADER include/spdk/env.h 00:02:47.006 TEST_HEADER include/spdk/event.h 00:02:47.006 TEST_HEADER include/spdk/fd_group.h 00:02:47.006 TEST_HEADER include/spdk/fd.h 00:02:47.006 TEST_HEADER include/spdk/file.h 00:02:47.006 TEST_HEADER include/spdk/ftl.h 00:02:47.006 TEST_HEADER include/spdk/gpt_spec.h 00:02:47.006 TEST_HEADER include/spdk/hexlify.h 00:02:47.006 TEST_HEADER include/spdk/histogram_data.h 00:02:47.006 TEST_HEADER include/spdk/idxd.h 00:02:47.006 TEST_HEADER include/spdk/idxd_spec.h 00:02:47.006 TEST_HEADER include/spdk/init.h 00:02:47.006 TEST_HEADER include/spdk/ioat_spec.h 00:02:47.006 TEST_HEADER include/spdk/ioat.h 00:02:47.006 TEST_HEADER include/spdk/iscsi_spec.h 00:02:47.006 TEST_HEADER include/spdk/jsonrpc.h 00:02:47.006 TEST_HEADER include/spdk/json.h 00:02:47.006 TEST_HEADER include/spdk/keyring_module.h 00:02:47.006 TEST_HEADER include/spdk/keyring.h 00:02:47.006 TEST_HEADER include/spdk/likely.h 00:02:47.006 TEST_HEADER include/spdk/log.h 00:02:47.006 TEST_HEADER include/spdk/lvol.h 00:02:47.006 TEST_HEADER include/spdk/memory.h 00:02:47.006 TEST_HEADER include/spdk/mmio.h 00:02:47.006 TEST_HEADER include/spdk/nbd.h 00:02:47.006 TEST_HEADER include/spdk/net.h 00:02:47.006 TEST_HEADER include/spdk/notify.h 00:02:47.006 TEST_HEADER include/spdk/nvme.h 00:02:47.006 TEST_HEADER include/spdk/nvme_intel.h 00:02:47.006 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:47.006 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:47.006 TEST_HEADER include/spdk/nvme_zns.h 00:02:47.006 TEST_HEADER include/spdk/nvme_spec.h 00:02:47.006 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:47.006 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:47.006 TEST_HEADER include/spdk/nvmf_spec.h 00:02:47.006 TEST_HEADER include/spdk/nvmf.h 00:02:47.006 TEST_HEADER include/spdk/nvmf_transport.h 00:02:47.006 TEST_HEADER include/spdk/opal.h 00:02:47.006 TEST_HEADER include/spdk/opal_spec.h 00:02:47.006 TEST_HEADER include/spdk/pci_ids.h 00:02:47.006 TEST_HEADER include/spdk/pipe.h 00:02:47.006 TEST_HEADER include/spdk/queue.h 00:02:47.006 TEST_HEADER include/spdk/reduce.h 00:02:47.006 TEST_HEADER include/spdk/rpc.h 00:02:47.006 CC 
examples/interrupt_tgt/interrupt_tgt.o 00:02:47.006 TEST_HEADER include/spdk/scheduler.h 00:02:47.006 TEST_HEADER include/spdk/scsi.h 00:02:47.006 TEST_HEADER include/spdk/scsi_spec.h 00:02:47.006 TEST_HEADER include/spdk/sock.h 00:02:47.006 TEST_HEADER include/spdk/stdinc.h 00:02:47.006 TEST_HEADER include/spdk/string.h 00:02:47.006 TEST_HEADER include/spdk/thread.h 00:02:47.006 TEST_HEADER include/spdk/trace.h 00:02:47.006 TEST_HEADER include/spdk/trace_parser.h 00:02:47.006 TEST_HEADER include/spdk/tree.h 00:02:47.006 TEST_HEADER include/spdk/ublk.h 00:02:47.006 TEST_HEADER include/spdk/util.h 00:02:47.006 TEST_HEADER include/spdk/uuid.h 00:02:47.006 TEST_HEADER include/spdk/version.h 00:02:47.006 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:47.006 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:47.006 TEST_HEADER include/spdk/vhost.h 00:02:47.006 TEST_HEADER include/spdk/vmd.h 00:02:47.006 TEST_HEADER include/spdk/xor.h 00:02:47.006 TEST_HEADER include/spdk/zipf.h 00:02:47.006 CXX test/cpp_headers/accel.o 00:02:47.006 CXX test/cpp_headers/accel_module.o 00:02:47.007 CXX test/cpp_headers/assert.o 00:02:47.007 CC app/nvmf_tgt/nvmf_main.o 00:02:47.007 CXX test/cpp_headers/barrier.o 00:02:47.007 CXX test/cpp_headers/base64.o 00:02:47.007 CXX test/cpp_headers/bdev.o 00:02:47.007 CXX test/cpp_headers/bdev_module.o 00:02:47.007 CXX test/cpp_headers/bdev_zone.o 00:02:47.007 CXX test/cpp_headers/bit_array.o 00:02:47.007 CXX test/cpp_headers/bit_pool.o 00:02:47.007 CXX test/cpp_headers/blob_bdev.o 00:02:47.007 CXX test/cpp_headers/blobfs_bdev.o 00:02:47.007 CXX test/cpp_headers/blobfs.o 00:02:47.007 CXX test/cpp_headers/blob.o 00:02:47.007 CXX test/cpp_headers/conf.o 00:02:47.007 CC app/spdk_dd/spdk_dd.o 00:02:47.007 CXX test/cpp_headers/config.o 00:02:47.007 CC app/iscsi_tgt/iscsi_tgt.o 00:02:47.007 CXX test/cpp_headers/cpuset.o 00:02:47.007 CXX test/cpp_headers/crc16.o 00:02:47.007 CC test/app/jsoncat/jsoncat.o 00:02:47.007 CC app/spdk_tgt/spdk_tgt.o 00:02:47.007 CXX test/cpp_headers/crc32.o 00:02:47.007 CC test/app/histogram_perf/histogram_perf.o 00:02:47.007 CC examples/ioat/verify/verify.o 00:02:47.007 CC examples/util/zipf/zipf.o 00:02:47.007 CC test/env/pci/pci_ut.o 00:02:47.007 CC examples/ioat/perf/perf.o 00:02:47.007 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:47.007 CC test/env/memory/memory_ut.o 00:02:47.007 CC test/env/vtophys/vtophys.o 00:02:47.007 CC test/app/stub/stub.o 00:02:47.007 CC app/fio/nvme/fio_plugin.o 00:02:47.007 CC test/thread/poller_perf/poller_perf.o 00:02:47.007 CC test/dma/test_dma/test_dma.o 00:02:47.007 CC test/app/bdev_svc/bdev_svc.o 00:02:47.007 CC app/fio/bdev/fio_plugin.o 00:02:47.270 CC test/env/mem_callbacks/mem_callbacks.o 00:02:47.270 LINK spdk_lspci 00:02:47.270 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:47.270 LINK rpc_client_test 00:02:47.270 LINK jsoncat 00:02:47.270 LINK histogram_perf 00:02:47.270 LINK spdk_nvme_discover 00:02:47.270 LINK interrupt_tgt 00:02:47.270 LINK nvmf_tgt 00:02:47.270 LINK poller_perf 00:02:47.270 LINK zipf 00:02:47.270 CXX test/cpp_headers/crc64.o 00:02:47.270 LINK vtophys 00:02:47.270 CXX test/cpp_headers/dma.o 00:02:47.270 CXX test/cpp_headers/dif.o 00:02:47.270 LINK env_dpdk_post_init 00:02:47.270 CXX test/cpp_headers/endian.o 00:02:47.270 CXX test/cpp_headers/env_dpdk.o 00:02:47.270 CXX test/cpp_headers/env.o 00:02:47.270 CXX test/cpp_headers/event.o 00:02:47.270 LINK iscsi_tgt 00:02:47.534 CXX test/cpp_headers/fd_group.o 00:02:47.534 CXX test/cpp_headers/fd.o 00:02:47.534 CXX test/cpp_headers/file.o 
00:02:47.534 CXX test/cpp_headers/ftl.o 00:02:47.534 CXX test/cpp_headers/gpt_spec.o 00:02:47.534 LINK stub 00:02:47.534 CXX test/cpp_headers/hexlify.o 00:02:47.534 LINK spdk_tgt 00:02:47.534 CXX test/cpp_headers/histogram_data.o 00:02:47.534 CXX test/cpp_headers/idxd.o 00:02:47.534 LINK spdk_trace_record 00:02:47.534 LINK bdev_svc 00:02:47.534 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:47.534 CXX test/cpp_headers/idxd_spec.o 00:02:47.534 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:47.534 LINK verify 00:02:47.534 LINK ioat_perf 00:02:47.534 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:47.534 CXX test/cpp_headers/init.o 00:02:47.534 CXX test/cpp_headers/ioat.o 00:02:47.534 CXX test/cpp_headers/ioat_spec.o 00:02:47.534 CXX test/cpp_headers/iscsi_spec.o 00:02:47.797 CXX test/cpp_headers/json.o 00:02:47.797 CXX test/cpp_headers/jsonrpc.o 00:02:47.797 CXX test/cpp_headers/keyring.o 00:02:47.797 CXX test/cpp_headers/keyring_module.o 00:02:47.797 LINK spdk_trace 00:02:47.797 CXX test/cpp_headers/likely.o 00:02:47.797 CXX test/cpp_headers/log.o 00:02:47.797 CXX test/cpp_headers/lvol.o 00:02:47.797 CXX test/cpp_headers/memory.o 00:02:47.797 CXX test/cpp_headers/mmio.o 00:02:47.797 CXX test/cpp_headers/nbd.o 00:02:47.797 CXX test/cpp_headers/net.o 00:02:47.797 LINK spdk_dd 00:02:47.797 CXX test/cpp_headers/notify.o 00:02:47.797 CXX test/cpp_headers/nvme.o 00:02:47.797 CXX test/cpp_headers/nvme_intel.o 00:02:47.797 CXX test/cpp_headers/nvme_ocssd.o 00:02:47.797 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:47.797 CXX test/cpp_headers/nvme_spec.o 00:02:47.797 CXX test/cpp_headers/nvme_zns.o 00:02:47.797 CXX test/cpp_headers/nvmf_cmd.o 00:02:47.797 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:47.797 CXX test/cpp_headers/nvmf.o 00:02:47.797 CXX test/cpp_headers/nvmf_spec.o 00:02:48.065 LINK test_dma 00:02:48.065 CXX test/cpp_headers/nvmf_transport.o 00:02:48.065 LINK pci_ut 00:02:48.065 CXX test/cpp_headers/opal.o 00:02:48.065 CC test/event/event_perf/event_perf.o 00:02:48.065 CC test/event/reactor/reactor.o 00:02:48.065 CXX test/cpp_headers/opal_spec.o 00:02:48.065 CC test/event/reactor_perf/reactor_perf.o 00:02:48.065 CXX test/cpp_headers/pci_ids.o 00:02:48.065 CXX test/cpp_headers/pipe.o 00:02:48.065 CC test/event/app_repeat/app_repeat.o 00:02:48.065 CC examples/vmd/lsvmd/lsvmd.o 00:02:48.065 CC examples/sock/hello_world/hello_sock.o 00:02:48.065 CC examples/idxd/perf/perf.o 00:02:48.065 CC examples/thread/thread/thread_ex.o 00:02:48.065 CC test/event/scheduler/scheduler.o 00:02:48.065 CXX test/cpp_headers/queue.o 00:02:48.065 CXX test/cpp_headers/reduce.o 00:02:48.065 CXX test/cpp_headers/rpc.o 00:02:48.065 CXX test/cpp_headers/scheduler.o 00:02:48.065 CXX test/cpp_headers/scsi.o 00:02:48.328 CXX test/cpp_headers/scsi_spec.o 00:02:48.328 CC examples/vmd/led/led.o 00:02:48.328 CXX test/cpp_headers/sock.o 00:02:48.328 CXX test/cpp_headers/stdinc.o 00:02:48.328 CXX test/cpp_headers/string.o 00:02:48.328 CXX test/cpp_headers/thread.o 00:02:48.328 CXX test/cpp_headers/trace.o 00:02:48.328 LINK nvme_fuzz 00:02:48.328 CXX test/cpp_headers/trace_parser.o 00:02:48.328 LINK spdk_bdev 00:02:48.328 CXX test/cpp_headers/tree.o 00:02:48.328 CXX test/cpp_headers/ublk.o 00:02:48.328 CXX test/cpp_headers/util.o 00:02:48.328 CXX test/cpp_headers/uuid.o 00:02:48.328 CXX test/cpp_headers/version.o 00:02:48.328 LINK event_perf 00:02:48.328 LINK reactor 00:02:48.328 CXX test/cpp_headers/vfio_user_pci.o 00:02:48.328 LINK reactor_perf 00:02:48.328 CXX test/cpp_headers/vfio_user_spec.o 00:02:48.328 CXX 
test/cpp_headers/vhost.o 00:02:48.328 CXX test/cpp_headers/vmd.o 00:02:48.328 CXX test/cpp_headers/xor.o 00:02:48.328 CC app/vhost/vhost.o 00:02:48.328 LINK lsvmd 00:02:48.328 CXX test/cpp_headers/zipf.o 00:02:48.328 LINK spdk_nvme 00:02:48.328 LINK mem_callbacks 00:02:48.328 LINK app_repeat 00:02:48.588 LINK vhost_fuzz 00:02:48.588 LINK led 00:02:48.588 LINK scheduler 00:02:48.588 LINK thread 00:02:48.588 CC test/nvme/startup/startup.o 00:02:48.588 CC test/nvme/e2edp/nvme_dp.o 00:02:48.588 CC test/nvme/reset/reset.o 00:02:48.588 CC test/nvme/overhead/overhead.o 00:02:48.588 CC test/nvme/err_injection/err_injection.o 00:02:48.588 CC test/nvme/aer/aer.o 00:02:48.588 CC test/nvme/sgl/sgl.o 00:02:48.588 LINK hello_sock 00:02:48.588 CC test/nvme/boot_partition/boot_partition.o 00:02:48.588 CC test/nvme/simple_copy/simple_copy.o 00:02:48.588 CC test/nvme/connect_stress/connect_stress.o 00:02:48.588 CC test/nvme/reserve/reserve.o 00:02:48.846 CC test/accel/dif/dif.o 00:02:48.846 CC test/nvme/compliance/nvme_compliance.o 00:02:48.846 LINK vhost 00:02:48.846 CC test/nvme/fused_ordering/fused_ordering.o 00:02:48.846 CC test/blobfs/mkfs/mkfs.o 00:02:48.846 CC test/nvme/fdp/fdp.o 00:02:48.846 CC test/nvme/cuse/cuse.o 00:02:48.846 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:48.846 CC test/lvol/esnap/esnap.o 00:02:48.846 LINK spdk_nvme_perf 00:02:48.846 LINK idxd_perf 00:02:48.846 LINK startup 00:02:48.846 LINK spdk_nvme_identify 00:02:49.104 LINK fused_ordering 00:02:49.104 LINK boot_partition 00:02:49.104 LINK reserve 00:02:49.104 LINK mkfs 00:02:49.104 LINK simple_copy 00:02:49.104 LINK spdk_top 00:02:49.104 LINK connect_stress 00:02:49.104 LINK err_injection 00:02:49.104 LINK sgl 00:02:49.104 LINK nvme_dp 00:02:49.104 LINK aer 00:02:49.104 LINK overhead 00:02:49.104 CC examples/accel/perf/accel_perf.o 00:02:49.104 LINK doorbell_aers 00:02:49.104 CC examples/blob/hello_world/hello_blob.o 00:02:49.104 CC examples/blob/cli/blobcli.o 00:02:49.104 CC examples/nvme/reconnect/reconnect.o 00:02:49.104 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:49.104 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:49.104 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:49.104 CC examples/nvme/hotplug/hotplug.o 00:02:49.104 CC examples/nvme/hello_world/hello_world.o 00:02:49.104 CC examples/nvme/abort/abort.o 00:02:49.363 CC examples/nvme/arbitration/arbitration.o 00:02:49.363 LINK reset 00:02:49.363 LINK fdp 00:02:49.363 LINK nvme_compliance 00:02:49.363 LINK dif 00:02:49.363 LINK memory_ut 00:02:49.363 LINK cmb_copy 00:02:49.363 LINK pmr_persistence 00:02:49.620 LINK hotplug 00:02:49.620 LINK hello_blob 00:02:49.620 LINK hello_world 00:02:49.620 LINK arbitration 00:02:49.620 LINK reconnect 00:02:49.877 LINK accel_perf 00:02:49.877 LINK abort 00:02:49.877 CC test/bdev/bdevio/bdevio.o 00:02:49.877 LINK blobcli 00:02:49.877 LINK nvme_manage 00:02:50.134 CC examples/bdev/hello_world/hello_bdev.o 00:02:50.134 CC examples/bdev/bdevperf/bdevperf.o 00:02:50.392 LINK bdevio 00:02:50.392 LINK hello_bdev 00:02:50.392 LINK iscsi_fuzz 00:02:50.649 LINK cuse 00:02:51.217 LINK bdevperf 00:02:51.475 CC examples/nvmf/nvmf/nvmf.o 00:02:51.735 LINK nvmf 00:02:55.960 LINK esnap 00:02:55.960 00:02:55.960 real 1m15.122s 00:02:55.960 user 11m18.220s 00:02:55.960 sys 2m23.983s 00:02:55.960 16:08:15 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:55.960 16:08:15 make -- common/autotest_common.sh@10 -- $ set +x 00:02:55.960 ************************************ 00:02:55.960 END TEST make 00:02:55.960 
************************************ 00:02:55.960 16:08:15 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:55.960 16:08:15 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:55.960 16:08:15 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:55.960 16:08:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.960 16:08:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:55.960 16:08:15 -- pm/common@44 -- $ pid=424446 00:02:55.960 16:08:15 -- pm/common@50 -- $ kill -TERM 424446 00:02:55.960 16:08:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.960 16:08:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:55.960 16:08:15 -- pm/common@44 -- $ pid=424448 00:02:55.960 16:08:15 -- pm/common@50 -- $ kill -TERM 424448 00:02:55.960 16:08:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.960 16:08:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:55.960 16:08:15 -- pm/common@44 -- $ pid=424450 00:02:55.960 16:08:15 -- pm/common@50 -- $ kill -TERM 424450 00:02:55.960 16:08:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.960 16:08:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:55.960 16:08:15 -- pm/common@44 -- $ pid=424478 00:02:55.960 16:08:15 -- pm/common@50 -- $ sudo -E kill -TERM 424478 00:02:55.960 16:08:15 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:55.960 16:08:15 -- nvmf/common.sh@7 -- # uname -s 00:02:55.960 16:08:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:55.960 16:08:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:55.960 16:08:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:55.960 16:08:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:55.960 16:08:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:55.960 16:08:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:55.960 16:08:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:55.960 16:08:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:55.960 16:08:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:55.960 16:08:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:55.960 16:08:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:02:55.960 16:08:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:02:55.960 16:08:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:55.960 16:08:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:55.960 16:08:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:55.960 16:08:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:55.960 16:08:15 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:55.960 16:08:15 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:55.960 16:08:15 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:55.960 16:08:15 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:55.960 16:08:15 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:55.961 16:08:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:55.961 16:08:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:55.961 16:08:15 -- paths/export.sh@5 -- # export PATH 00:02:55.961 16:08:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:55.961 16:08:15 -- nvmf/common.sh@47 -- # : 0 00:02:55.961 16:08:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:55.961 16:08:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:55.961 16:08:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:55.961 16:08:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:55.961 16:08:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:55.961 16:08:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:55.961 16:08:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:55.961 16:08:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:55.961 16:08:15 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:55.961 16:08:15 -- spdk/autotest.sh@32 -- # uname -s 00:02:55.961 16:08:15 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:55.961 16:08:15 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:55.961 16:08:15 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:55.961 16:08:15 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:55.961 16:08:15 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:55.961 16:08:15 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:55.961 16:08:15 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:55.961 16:08:15 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:55.961 16:08:15 -- spdk/autotest.sh@48 -- # udevadm_pid=482626 00:02:55.961 16:08:15 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:55.961 16:08:15 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:55.961 16:08:15 -- pm/common@17 -- # local monitor 00:02:55.961 16:08:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.961 16:08:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.961 16:08:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.961 16:08:15 -- pm/common@21 -- # date +%s 00:02:55.961 16:08:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.961 16:08:15 -- pm/common@21 -- # date +%s 00:02:55.961 
16:08:15 -- pm/common@25 -- # sleep 1 00:02:55.961 16:08:15 -- pm/common@21 -- # date +%s 00:02:55.961 16:08:15 -- pm/common@21 -- # date +%s 00:02:55.961 16:08:15 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1722002895 00:02:55.961 16:08:15 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1722002895 00:02:55.961 16:08:15 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1722002895 00:02:55.961 16:08:15 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1722002895 00:02:55.961 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1722002895_collect-vmstat.pm.log 00:02:55.961 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1722002895_collect-cpu-load.pm.log 00:02:55.961 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1722002895_collect-cpu-temp.pm.log 00:02:55.961 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1722002895_collect-bmc-pm.bmc.pm.log 00:02:56.897 16:08:16 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:56.897 16:08:16 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:56.897 16:08:16 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:56.897 16:08:16 -- common/autotest_common.sh@10 -- # set +x 00:02:56.897 16:08:16 -- spdk/autotest.sh@59 -- # create_test_list 00:02:56.897 16:08:16 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:56.897 16:08:16 -- common/autotest_common.sh@10 -- # set +x 00:02:56.897 16:08:16 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:56.897 16:08:16 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:56.897 16:08:16 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:56.897 16:08:16 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:56.897 16:08:16 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:56.897 16:08:16 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:56.897 16:08:16 -- common/autotest_common.sh@1455 -- # uname 00:02:56.897 16:08:16 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:56.897 16:08:16 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:56.897 16:08:16 -- common/autotest_common.sh@1475 -- # uname 00:02:56.897 16:08:16 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:56.897 16:08:16 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:56.897 16:08:16 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:56.897 16:08:16 -- spdk/autotest.sh@72 -- # hash lcov 00:02:56.897 16:08:16 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:56.897 16:08:16 -- spdk/autotest.sh@80 -- # export 
'LCOV_OPTS= 00:02:56.897 --rc lcov_branch_coverage=1 00:02:56.897 --rc lcov_function_coverage=1 00:02:56.897 --rc genhtml_branch_coverage=1 00:02:56.897 --rc genhtml_function_coverage=1 00:02:56.897 --rc genhtml_legend=1 00:02:56.897 --rc geninfo_all_blocks=1 00:02:56.897 ' 00:02:56.897 16:08:16 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:56.897 --rc lcov_branch_coverage=1 00:02:56.897 --rc lcov_function_coverage=1 00:02:56.897 --rc genhtml_branch_coverage=1 00:02:56.897 --rc genhtml_function_coverage=1 00:02:56.897 --rc genhtml_legend=1 00:02:56.897 --rc geninfo_all_blocks=1 00:02:56.897 ' 00:02:56.897 16:08:16 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:56.897 --rc lcov_branch_coverage=1 00:02:56.897 --rc lcov_function_coverage=1 00:02:56.897 --rc genhtml_branch_coverage=1 00:02:56.897 --rc genhtml_function_coverage=1 00:02:56.897 --rc genhtml_legend=1 00:02:56.897 --rc geninfo_all_blocks=1 00:02:56.897 --no-external' 00:02:56.897 16:08:16 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:56.897 --rc lcov_branch_coverage=1 00:02:56.897 --rc lcov_function_coverage=1 00:02:56.897 --rc genhtml_branch_coverage=1 00:02:56.897 --rc genhtml_function_coverage=1 00:02:56.897 --rc genhtml_legend=1 00:02:56.897 --rc geninfo_all_blocks=1 00:02:56.897 --no-external' 00:02:56.897 16:08:16 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:56.897 lcov: LCOV version 1.14 00:02:56.897 16:08:16 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:14.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:14.991 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:27.202 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:27.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:27.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:27.203 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:27.203 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:27.203 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:27.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:27.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:29.729 16:08:49 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:29.729 16:08:49 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:29.729 16:08:49 -- common/autotest_common.sh@10 -- # set +x 00:03:29.729 16:08:49 -- spdk/autotest.sh@91 -- # rm -f 00:03:29.729 16:08:49 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:30.666 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:03:30.666 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:30.666 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:30.667 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:30.667 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:30.667 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:30.667 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:30.667 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:30.667 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:30.667 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:30.667 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:30.667 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:30.667 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:30.667 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:30.667 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:30.667 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:30.667 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:30.925 16:08:50 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:30.925 16:08:50 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:30.925 16:08:50 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:30.925 16:08:50 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:30.925 16:08:50 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:30.925 16:08:50 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:30.925 16:08:50 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:30.925 16:08:50 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:30.925 16:08:50 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:30.925 16:08:50 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:30.925 16:08:50 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:30.925 16:08:50 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:30.925 16:08:50 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:30.925 16:08:50 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:30.925 16:08:50 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:30.925 No valid GPT data, bailing 00:03:30.925 16:08:50 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:30.925 16:08:50 -- scripts/common.sh@391 -- # pt= 00:03:30.925 16:08:50 -- scripts/common.sh@392 -- # return 1 00:03:30.925 16:08:50 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 
00:03:30.925 1+0 records in 00:03:30.925 1+0 records out 00:03:30.925 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00226501 s, 463 MB/s 00:03:30.925 16:08:50 -- spdk/autotest.sh@118 -- # sync 00:03:30.925 16:08:50 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:30.925 16:08:50 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:30.925 16:08:50 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:32.826 16:08:52 -- spdk/autotest.sh@124 -- # uname -s 00:03:32.826 16:08:52 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:32.826 16:08:52 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:32.826 16:08:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:32.826 16:08:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:32.826 16:08:52 -- common/autotest_common.sh@10 -- # set +x 00:03:32.826 ************************************ 00:03:32.826 START TEST setup.sh 00:03:32.826 ************************************ 00:03:32.826 16:08:52 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:33.084 * Looking for test storage... 00:03:33.084 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:33.084 16:08:52 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:33.084 16:08:52 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:33.084 16:08:52 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:33.084 16:08:52 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:33.084 16:08:52 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:33.084 16:08:52 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:33.084 ************************************ 00:03:33.084 START TEST acl 00:03:33.084 ************************************ 00:03:33.084 16:08:52 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:33.084 * Looking for test storage... 
00:03:33.084 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:33.084 16:08:52 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:33.084 16:08:52 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:33.084 16:08:52 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:33.084 16:08:52 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:33.084 16:08:52 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:33.084 16:08:52 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:33.084 16:08:52 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:33.084 16:08:52 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:33.084 16:08:52 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:33.084 16:08:52 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:33.084 16:08:52 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:33.084 16:08:52 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:33.084 16:08:52 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:33.084 16:08:52 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:33.084 16:08:52 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:33.084 16:08:52 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:34.474 16:08:54 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:34.474 16:08:54 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:34.474 16:08:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.474 16:08:54 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:34.474 16:08:54 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.474 16:08:54 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:35.447 Hugepages 00:03:35.447 node hugesize free / total 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.447 00:03:35.447 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:35.447 16:08:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.705 16:08:55 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:35.705 16:08:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:35.705 16:08:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:35.705 16:08:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.705 16:08:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:35.705 16:08:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:35.705 16:08:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:35.705 16:08:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.705 16:08:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:35.705 16:08:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:35.705 16:08:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:35.705 16:08:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.705 16:08:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:35.705 16:08:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:35.705 16:08:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:35.705 16:08:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.705 16:08:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:03:35.705 16:08:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:35.705 16:08:55 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:35.705 16:08:55 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:35.705 16:08:55 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:35.705 16:08:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.705 16:08:55 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:35.705 16:08:55 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:35.705 16:08:55 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:35.705 16:08:55 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:35.705 16:08:55 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:35.705 ************************************ 00:03:35.705 START TEST denied 00:03:35.705 ************************************ 00:03:35.705 16:08:55 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:03:35.705 16:08:55 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:03:35.705 16:08:55 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:35.705 16:08:55 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:03:35.705 16:08:55 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.705 16:08:55 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:37.081 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:03:37.081 16:08:56 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:03:37.081 16:08:56 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:37.081 16:08:56 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:37.081 16:08:56 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:03:37.081 16:08:56 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:03:37.081 16:08:56 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:37.081 16:08:56 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:37.081 16:08:56 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:37.081 16:08:56 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:37.081 16:08:56 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:39.616 00:03:39.616 real 0m3.819s 00:03:39.616 user 0m1.095s 00:03:39.616 sys 0m1.812s 00:03:39.616 16:08:59 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:39.616 16:08:59 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:39.616 ************************************ 00:03:39.616 END TEST denied 00:03:39.616 ************************************ 00:03:39.616 16:08:59 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:39.616 16:08:59 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:39.616 16:08:59 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:39.616 16:08:59 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:39.616 ************************************ 00:03:39.616 START TEST allowed 00:03:39.616 ************************************ 00:03:39.616 16:08:59 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:03:39.616 16:08:59 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:03:39.616 16:08:59 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:39.616 16:08:59 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:03:39.616 16:08:59 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.616 16:08:59 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:42.147 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:42.147 16:09:01 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:42.147 16:09:01 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:42.147 16:09:01 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:42.147 16:09:01 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:42.147 16:09:01 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:43.523 00:03:43.523 real 0m3.747s 00:03:43.523 user 0m0.974s 00:03:43.523 sys 0m1.619s 00:03:43.523 16:09:02 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:43.523 16:09:02 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:43.523 ************************************ 00:03:43.523 END TEST allowed 00:03:43.523 ************************************ 00:03:43.523 00:03:43.523 real 0m10.305s 00:03:43.523 user 0m3.124s 00:03:43.523 sys 0m5.180s 00:03:43.523 16:09:02 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:43.523 16:09:02 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:43.523 ************************************ 00:03:43.523 END TEST acl 00:03:43.523 ************************************ 00:03:43.523 16:09:02 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:43.523 16:09:02 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:43.523 16:09:02 setup.sh -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:03:43.523 16:09:02 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:43.523 ************************************ 00:03:43.523 START TEST hugepages 00:03:43.523 ************************************ 00:03:43.523 16:09:02 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:43.523 * Looking for test storage... 00:03:43.523 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:43.523 16:09:03 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:43.523 16:09:03 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:43.523 16:09:03 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:43.523 16:09:03 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:43.523 16:09:03 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:43.523 16:09:03 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:43.523 16:09:03 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:43.523 16:09:03 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:43.523 16:09:03 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:43.523 16:09:03 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:43.523 16:09:03 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.523 16:09:03 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.523 16:09:03 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.523 16:09:03 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.523 16:09:03 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.523 16:09:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.523 16:09:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.523 16:09:03 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43495248 kB' 'MemAvailable: 46998744 kB' 'Buffers: 2704 kB' 'Cached: 10448016 kB' 'SwapCached: 0 kB' 'Active: 7456276 kB' 'Inactive: 3506192 kB' 'Active(anon): 7060780 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514928 kB' 'Mapped: 174236 kB' 'Shmem: 6549032 kB' 'KReclaimable: 191292 kB' 'Slab: 560312 kB' 'SReclaimable: 191292 kB' 'SUnreclaim: 369020 kB' 'KernelStack: 12912 kB' 'PageTables: 8492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562308 kB' 'Committed_AS: 8146740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196116 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1844828 kB' 'DirectMap2M: 14852096 kB' 'DirectMap1G: 52428800 kB' 00:03:43.523 16:09:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:03:43.523 16:09:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue
[ repetitive xtrace elided: the same '[[ <field> == Hugepagesize ]] / continue' check from setup/common.sh@31-32 repeats for every remaining /proc/meminfo field already listed in the printf above, until the HugePages_* and Hugepagesize entries are reached below ]
00:03:43.525 16:09:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.525 16:09:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.525 16:09:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.525 16:09:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.525 16:09:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.525 16:09:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.525 16:09:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.525 16:09:03 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:43.525 16:09:03 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:43.525 16:09:03 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:43.525 16:09:03 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:43.525 16:09:03 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:43.525 16:09:03 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:43.525 16:09:03 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:43.525 16:09:03 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:43.525 16:09:03 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:43.525 16:09:03 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:43.525 16:09:03 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:43.525 16:09:03 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.525 16:09:03 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:43.525 16:09:03 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.525 16:09:03 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:43.525 16:09:03 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:43.525 16:09:03 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:43.525 16:09:03 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:43.525 16:09:03 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:43.525 16:09:03 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:43.525 16:09:03 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:43.525 16:09:03 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:43.525 16:09:03 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:43.525 16:09:03 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:43.525 16:09:03 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:43.525 16:09:03 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:43.525 16:09:03 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:43.525 16:09:03 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:43.525 16:09:03 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:43.525 16:09:03 
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:43.525 16:09:03 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:43.525 16:09:03 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:43.525 16:09:03 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:43.525 16:09:03 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:43.525 16:09:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:43.525 ************************************ 00:03:43.525 START TEST default_setup 00:03:43.525 ************************************ 00:03:43.525 16:09:03 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:03:43.525 16:09:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:43.525 16:09:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:43.525 16:09:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:43.525 16:09:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:43.525 16:09:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:43.525 16:09:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:43.525 16:09:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:43.525 16:09:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:43.525 16:09:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:43.525 16:09:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:43.525 16:09:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:43.525 16:09:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:43.525 16:09:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:43.525 16:09:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:43.525 16:09:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:43.525 16:09:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:43.525 16:09:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:43.525 16:09:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:43.525 16:09:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:43.525 16:09:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:43.525 16:09:03 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.525 16:09:03 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:44.898 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:44.898 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:44.898 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:44.898 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:44.898 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:44.898 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:44.898 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 
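Up to this point the trace is setup/common.sh's get_meminfo loop skipping every /proc/meminfo key until it reaches Hugepagesize (2048 kB), after which hugepages.sh records the default page size, zeroes every per-node nr_hugepages counter (clear_hp, with CLEAR_HUGE=yes exported), and default_setup converts the 2097152 kB request into 1024 pages for node 0; the ioatdma/nvme -> vfio-pci lines around here are scripts/setup.sh rebinding devices. A minimal sketch of that hugepage bookkeeping, assuming illustrative names (request_kb, pages) that are not part of the SPDK scripts:

#!/usr/bin/env bash
# Sketch of the hugepage bookkeeping visible in the trace above.
shopt -s nullglob

# Default hugepage size in kB, as reported by /proc/meminfo (2048 here).
hugepagesize_kb=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)

# clear_hp equivalent: zero the per-node counters before the test starts.
for hp in /sys/devices/system/node/node*/hugepages/hugepages-"${hugepagesize_kb}"kB/nr_hugepages; do
    echo 0 | sudo tee "$hp" > /dev/null
done

# get_test_nr_hugepages 2097152 0: a 2097152 kB request at 2048 kB per page
# is 1024 pages, recorded for node 0.
request_kb=2097152
pages=$(( request_kb / hugepagesize_kb ))
echo "$pages" | sudo tee \
    /sys/devices/system/node/node0/hugepages/hugepages-"${hugepagesize_kb}"kB/nr_hugepages > /dev/null

The actual allocation in the test is performed later by scripts/setup.sh; the arithmetic above is only the 2097152 / 2048 = 1024 step that produces nr_hugepages=1024 in the trace.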
00:03:44.898 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:44.898 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:44.898 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:44.898 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:44.898 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:44.898 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:44.898 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:44.898 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:44.898 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:45.841 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45613400 kB' 'MemAvailable: 49116896 kB' 'Buffers: 2704 kB' 'Cached: 10448100 kB' 'SwapCached: 0 kB' 'Active: 7474784 kB' 'Inactive: 3506192 kB' 'Active(anon): 7079288 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533368 kB' 'Mapped: 174240 kB' 'Shmem: 6549116 kB' 'KReclaimable: 191292 kB' 'Slab: 560064 kB' 'SReclaimable: 191292 kB' 'SUnreclaim: 368772 kB' 'KernelStack: 12800 kB' 'PageTables: 8036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 8167736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
196196 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1844828 kB' 'DirectMap2M: 14852096 kB' 'DirectMap1G: 52428800 kB' 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.841 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.842 
16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.842 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- 
setup/common.sh@20 -- # local mem_f mem 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.843 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45613372 kB' 'MemAvailable: 49116868 kB' 'Buffers: 2704 kB' 'Cached: 10448104 kB' 'SwapCached: 0 kB' 'Active: 7474260 kB' 'Inactive: 3506192 kB' 'Active(anon): 7078764 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532852 kB' 'Mapped: 174272 kB' 'Shmem: 6549120 kB' 'KReclaimable: 191292 kB' 'Slab: 560048 kB' 'SReclaimable: 191292 kB' 'SUnreclaim: 368756 kB' 'KernelStack: 12880 kB' 'PageTables: 8252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 8167756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196180 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1844828 kB' 'DirectMap2M: 14852096 kB' 'DirectMap1G: 52428800 kB' 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.844 16:09:05 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.844 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.845 16:09:05 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.845 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.846 16:09:05 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45613380 kB' 'MemAvailable: 49116876 kB' 'Buffers: 2704 kB' 'Cached: 10448120 kB' 'SwapCached: 0 kB' 'Active: 7474148 kB' 'Inactive: 3506192 kB' 'Active(anon): 7078652 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532752 kB' 'Mapped: 174272 kB' 'Shmem: 6549136 kB' 'KReclaimable: 191292 kB' 'Slab: 560132 kB' 'SReclaimable: 191292 kB' 'SUnreclaim: 368840 kB' 'KernelStack: 12848 kB' 'PageTables: 8180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 8167776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196180 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1844828 kB' 'DirectMap2M: 14852096 kB' 'DirectMap1G: 52428800 kB' 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.846 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.847 16:09:05 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.847 16:09:05 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.847 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.848 16:09:05 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:45.848 nr_hugepages=1024 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:45.848 resv_hugepages=0 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:45.848 surplus_hugepages=0 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:45.848 anon_hugepages=0 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.848 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45613128 kB' 'MemAvailable: 49116624 kB' 'Buffers: 2704 kB' 'Cached: 10448140 kB' 'SwapCached: 0 kB' 'Active: 7474072 kB' 'Inactive: 3506192 kB' 'Active(anon): 7078576 kB' 'Inactive(anon): 0 kB' 
'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532576 kB' 'Mapped: 174272 kB' 'Shmem: 6549156 kB' 'KReclaimable: 191292 kB' 'Slab: 560132 kB' 'SReclaimable: 191292 kB' 'SUnreclaim: 368840 kB' 'KernelStack: 12864 kB' 'PageTables: 8228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 8167796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196196 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1844828 kB' 'DirectMap2M: 14852096 kB' 'DirectMap1G: 52428800 kB' 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.849 16:09:05 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.849 16:09:05 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.849 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
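[editor's note] The trace above repeats the same read loop once per /proc/meminfo key while looking up HugePages_Surp, HugePages_Rsvd and HugePages_Total. A condensed sketch of what the get_meminfo lookup in setup/common.sh is doing follows; the key names and file paths are taken from the log, but the body below is a simplified illustration, not the verbatim helper.

    # Look up one field from /proc/meminfo, or from a node's meminfo when
    # a node id is given (per-node stats carry a "Node <id> " prefix that
    # has to be stripped before splitting on ": ").
    get_meminfo() {
            local get=$1 node=$2
            local mem_f=/proc/meminfo
            if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
                    mem_f=/sys/devices/system/node/node$node/meminfo
            fi
            local var val _
            while IFS=': ' read -r var val _; do
                    # print only the numeric field for the requested key
                    [[ $var == "$get" ]] && { echo "$val"; return 0; }
            done < <(sed 's/^Node [0-9]* //' "$mem_f")
            return 1
    }

    # e.g. get_meminfo HugePages_Total   -> 1024 (system-wide, as echoed above)
    #      get_meminfo HugePages_Surp 0  -> surplus pages on NUMA node 0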
00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:45.850 16:09:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20762732 kB' 'MemUsed: 12114208 kB' 'SwapCached: 0 kB' 'Active: 5516668 kB' 'Inactive: 3357228 kB' 'Active(anon): 5244736 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3357228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8718256 kB' 'Mapped: 93796 kB' 'AnonPages: 158840 kB' 'Shmem: 5089096 kB' 'KernelStack: 6968 kB' 'PageTables: 4076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 94100 kB' 'Slab: 310320 kB' 'SReclaimable: 94100 kB' 'SUnreclaim: 216220 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.851 16:09:05 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.851 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.852 16:09:05 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.852 16:09:05 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:45.852 node0=1024 expecting 1024 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:45.852 00:03:45.852 real 0m2.479s 00:03:45.852 user 0m0.691s 00:03:45.852 sys 0m0.921s 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:45.852 16:09:05 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:45.852 ************************************ 00:03:45.852 END TEST default_setup 00:03:45.852 ************************************ 00:03:46.110 16:09:05 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:46.110 16:09:05 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:46.110 16:09:05 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:46.110 16:09:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:46.110 ************************************ 00:03:46.110 START TEST per_node_1G_alloc 00:03:46.110 ************************************ 00:03:46.110 16:09:05 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:03:46.110 16:09:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:46.110 16:09:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:46.110 16:09:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:46.110 16:09:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:46.110 16:09:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:46.110 16:09:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:46.110 16:09:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:46.110 16:09:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( 
size >= default_hugepages )) 00:03:46.110 16:09:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:46.110 16:09:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:46.110 16:09:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:46.110 16:09:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:46.110 16:09:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:46.110 16:09:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:46.110 16:09:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:46.110 16:09:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:46.110 16:09:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:46.110 16:09:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:46.110 16:09:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:46.110 16:09:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:46.110 16:09:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:46.110 16:09:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:46.110 16:09:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:46.110 16:09:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:46.110 16:09:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:46.110 16:09:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.110 16:09:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:47.047 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:47.047 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:47.047 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:47.047 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:47.047 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:47.047 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:47.047 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:47.047 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:47.047 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:47.047 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:47.047 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:47.047 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:47.047 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:47.047 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:47.047 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:47.047 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:47.047 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:47.312 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:47.312 16:09:06 
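The hugepages.sh@62-@73 lines above are get_test_nr_hugepages_per_node dividing the request across the user-supplied NUMA nodes: 512 pages are booked for node 0 and 512 for node 1, after which the test sets NRHUGE=512 HUGENODE=0,1 and re-runs scripts/setup.sh, which reports the PCI devices as already bound to vfio-pci. A stripped-down sketch of that bookkeeping; the array and loop structure follow the trace, the function name is illustrative:

    split_hugepages_per_node() {
        local per_node=$1; shift           # pages requested per node, e.g. 512
        local -a nodes_test=()             # node id -> page count, as in hugepages.sh
        local node
        for node in "$@"; do               # user-supplied node ids, e.g. 0 1
            nodes_test[node]=$per_node
        done
        declare -p nodes_test              # -> declare -a nodes_test=([0]="512" [1]="512")
    }

    # matching this run: split_hugepages_per_node 512 0 1, then NRHUGE=512 HUGENODE=0,1 scripts/setup.sh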
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:47.312 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:47.312 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:47.312 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:47.312 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:47.312 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:47.312 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:47.312 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:47.312 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:47.312 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:47.312 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:47.312 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:47.312 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.312 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.312 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.312 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.312 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.312 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.312 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.312 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.312 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45630956 kB' 'MemAvailable: 49134452 kB' 'Buffers: 2704 kB' 'Cached: 10448220 kB' 'SwapCached: 0 kB' 'Active: 7470940 kB' 'Inactive: 3506192 kB' 'Active(anon): 7075444 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529140 kB' 'Mapped: 173320 kB' 'Shmem: 6549236 kB' 'KReclaimable: 191292 kB' 'Slab: 559928 kB' 'SReclaimable: 191292 kB' 'SUnreclaim: 368636 kB' 'KernelStack: 13056 kB' 'PageTables: 9320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 8163692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196320 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1844828 kB' 'DirectMap2M: 14852096 kB' 'DirectMap1G: 52428800 kB' 00:03:47.312 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.312 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.312 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.312 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.312 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.312 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.312 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.312 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.312 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.312 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.312 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.312 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.312 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.312 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 16:09:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.313 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 
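The long common.sh@31/@32 runs on either side of this point are setup/common.sh's get_meminfo at work: verify_nr_hugepages asks it for one key at a time (AnonHugePages above, HugePages_Surp and HugePages_Rsvd below), and the helper walks /proc/meminfo field by field, hitting "continue" on every non-matching key until it can echo the value and return 0. A simplified stand-in for that pattern, not the exact setup/common.sh code:

    get_meminfo() {
        local get=$1 node=${2:-}                 # key to look up, optional NUMA node
        local mem_f=/proc/meminfo
        # Per-node lookups read the node-local file instead; the real helper also
        # strips the leading "Node N " prefix those lines carry.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        while IFS=': ' read -r var val _; do     # e.g. var=HugePages_Surp val=0
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < "$mem_f"
        return 1
    }

    # as used here: anon=$(get_meminfo AnonHugePages); surp=$(get_meminfo HugePages_Surp)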
00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45629084 kB' 'MemAvailable: 49132580 kB' 'Buffers: 2704 kB' 'Cached: 10448220 kB' 'SwapCached: 0 kB' 'Active: 7471240 kB' 'Inactive: 3506192 kB' 'Active(anon): 7075744 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529828 kB' 'Mapped: 173260 kB' 'Shmem: 6549236 kB' 'KReclaimable: 191292 kB' 'Slab: 559932 kB' 'SReclaimable: 191292 kB' 'SUnreclaim: 368640 kB' 'KernelStack: 13152 kB' 'PageTables: 9212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 8165100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196352 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1844828 kB' 'DirectMap2M: 14852096 kB' 'DirectMap1G: 52428800 kB' 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 16:09:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 16:09:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.315 16:09:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.315 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45630516 kB' 'MemAvailable: 49134012 kB' 'Buffers: 2704 kB' 'Cached: 10448240 kB' 'SwapCached: 0 kB' 'Active: 7470376 kB' 'Inactive: 3506192 kB' 'Active(anon): 7074880 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529480 kB' 'Mapped: 173252 kB' 'Shmem: 6549256 kB' 'KReclaimable: 191292 kB' 'Slab: 559996 kB' 'SReclaimable: 191292 kB' 'SUnreclaim: 368704 kB' 'KernelStack: 13152 kB' 'PageTables: 9308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 8165120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196240 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1844828 kB' 'DirectMap2M: 14852096 kB' 'DirectMap1G: 52428800 kB' 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 16:09:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 16:09:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.318 16:09:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.318 16:09:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 16:09:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:47.319 nr_hugepages=1024 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:47.319 resv_hugepages=0 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:47.319 surplus_hugepages=0 00:03:47.319 16:09:06 
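The xtrace block above is the expansion of the get_meminfo helper from setup/common.sh: it snapshots /proc/meminfo (or, when a node number is passed, /sys/devices/system/node/node<N>/meminfo), strips the "Node N " prefix, then scans field by field until the requested counter is found and prints its value. A minimal bash sketch of that logic, reconstructed from this trace — the actual helper in setup/common.sh may differ in detail, and the caller lines at the bottom are illustrative only:

    #!/usr/bin/env bash
    shopt -s extglob  # needed for the +([0-9]) pattern applied to node files

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ mem_f mem

        mem_f=/proc/meminfo
        # Per-node counters live in sysfs; with no node argument the test below
        # fails (node/meminfo does not exist) and /proc/meminfo is used instead,
        # exactly as seen in the trace.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Node files prefix every line with "Node N "; strip it so the field
        # names line up with /proc/meminfo.
        mem=("${mem[@]#Node +([0-9]) }")

        # Scan for the requested field and print its numeric value.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    # Illustrative use, mirroring the hugepages.sh calls traced here:
    resv=$(get_meminfo HugePages_Rsvd)          # -> 0
    total=$(get_meminfo HugePages_Total)        # -> 1024
    surp_node0=$(get_meminfo HugePages_Surp 0)  # node 0 -> 0
    surp_node1=$(get_meminfo HugePages_Surp 1)  # node 1 -> 0

The surrounding hugepages.sh logic then checks that the 1024 allocated pages are spread across the two NUMA nodes (nodes_sys set to 512 per node, no_nodes=2), which matches the HugePages_Total: 512 values in the node0 and node1 snapshots later in this trace.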
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:47.319 anon_hugepages=0 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45629616 kB' 'MemAvailable: 49133112 kB' 'Buffers: 2704 kB' 'Cached: 10448260 kB' 'SwapCached: 0 kB' 'Active: 7470696 kB' 'Inactive: 3506192 kB' 'Active(anon): 7075200 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529392 kB' 'Mapped: 173312 kB' 'Shmem: 6549276 kB' 'KReclaimable: 191292 kB' 'Slab: 559960 kB' 'SReclaimable: 191292 kB' 'SUnreclaim: 368668 kB' 'KernelStack: 13152 kB' 'PageTables: 8632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 8164120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196320 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1844828 kB' 'DirectMap2M: 14852096 kB' 'DirectMap1G: 52428800 kB' 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.319 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.320 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.320 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.320 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.320 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.320 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.320 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.320 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.320 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.320 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.320 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.320 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.320 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.320 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.320 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.320 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.320 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.320 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.320 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.320 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.320 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.320 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.320 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.320 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.320 16:09:06 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.320 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.321 16:09:07 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21795604 kB' 'MemUsed: 11081336 kB' 'SwapCached: 0 kB' 'Active: 5517604 kB' 'Inactive: 3357228 kB' 'Active(anon): 5245672 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3357228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8718308 kB' 'Mapped: 93092 kB' 'AnonPages: 159924 kB' 'Shmem: 5089148 kB' 'KernelStack: 7256 kB' 'PageTables: 4752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 94100 kB' 'Slab: 310248 kB' 'SReclaimable: 94100 kB' 'SUnreclaim: 216148 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.321 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.322 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 23833876 kB' 'MemUsed: 3830896 kB' 'SwapCached: 0 kB' 'Active: 1952940 kB' 'Inactive: 148964 kB' 'Active(anon): 1829376 kB' 'Inactive(anon): 0 kB' 'Active(file): 123564 kB' 'Inactive(file): 148964 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1732696 kB' 'Mapped: 80340 kB' 'AnonPages: 369256 kB' 'Shmem: 1460168 kB' 'KernelStack: 5912 kB' 'PageTables: 4072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97192 kB' 'Slab: 249696 kB' 'SReclaimable: 97192 kB' 'SUnreclaim: 152504 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.323 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 16:09:07 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 
16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.324 16:09:07 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.324 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.325 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.325 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.325 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.325 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:47.325 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.325 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:47.325 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:47.325 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:47.325 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:47.325 node0=512 expecting 512 00:03:47.325 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:47.325 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:47.325 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:47.325 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:47.325 node1=512 expecting 512 00:03:47.325 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:47.325 00:03:47.325 real 0m1.421s 00:03:47.325 user 0m0.614s 00:03:47.325 sys 0m0.768s 00:03:47.325 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:47.325 16:09:07 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:47.325 ************************************ 00:03:47.325 END TEST per_node_1G_alloc 00:03:47.325 ************************************ 00:03:47.584 16:09:07 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:47.584 16:09:07 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:47.584 16:09:07 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:47.584 16:09:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:47.584 ************************************ 00:03:47.584 START TEST even_2G_alloc 
00:03:47.584 ************************************ 00:03:47.584 16:09:07 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:03:47.584 16:09:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:47.584 16:09:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:47.584 16:09:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:47.584 16:09:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:47.584 16:09:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:47.584 16:09:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:47.584 16:09:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:47.584 16:09:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:47.584 16:09:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:47.584 16:09:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:47.584 16:09:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:47.584 16:09:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:47.584 16:09:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:47.584 16:09:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:47.584 16:09:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:47.584 16:09:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:47.584 16:09:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:47.584 16:09:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:47.584 16:09:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:47.584 16:09:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:47.584 16:09:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:47.584 16:09:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:47.584 16:09:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:47.584 16:09:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:47.584 16:09:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:47.584 16:09:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:47.584 16:09:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.584 16:09:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:48.520 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:48.520 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:48.520 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:48.520 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:48.520 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:48.520 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:48.520 0000:00:04.2 (8086 
0e22): Already using the vfio-pci driver 00:03:48.520 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:48.520 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:48.520 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:48.520 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:48.520 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:48.520 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:48.520 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:48.520 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:48.520 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:48.520 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45612740 kB' 'MemAvailable: 49116236 kB' 'Buffers: 2704 kB' 'Cached: 10448360 kB' 'SwapCached: 0 kB' 'Active: 7479376 kB' 'Inactive: 3506192 kB' 'Active(anon): 7083880 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537668 kB' 'Mapped: 174228 kB' 'Shmem: 6549376 kB' 'KReclaimable: 191292 kB' 'Slab: 559908 kB' 'SReclaimable: 191292 kB' 'SUnreclaim: 368616 kB' 'KernelStack: 12880 kB' 'PageTables: 
8216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 8173324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196148 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1844828 kB' 'DirectMap2M: 14852096 kB' 'DirectMap1G: 52428800 kB' 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.786 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.786 16:09:08 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.787 16:09:08 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.787 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
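(The xtrace around this point is setup/common.sh's get_meminfo scanning every meminfo row until it reaches the requested key — AnonHugePages for the anon check, HugePages_Surp elsewhere. A minimal sketch of the same lookup, assuming only standard /proc and per-node sysfs meminfo files; the helper name meminfo_value is illustrative and not part of the SPDK scripts:

meminfo_value() {
    local key=$1 node=$2 file=/proc/meminfo row var val _
    # Per-node statistics live under sysfs when a node id is supplied, as in the node1 read above.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        file=/sys/devices/system/node/node$node/meminfo
    local -a rows
    mapfile -t rows < "$file"
    # Per-node files prefix each row with "Node <id> "; strip it so keys match /proc/meminfo.
    shopt -s extglob
    rows=("${rows[@]#Node +([0-9]) }")
    for row in "${rows[@]}"; do
        IFS=': ' read -r var val _ <<< "$row"
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done
    return 1
}

For example, meminfo_value HugePages_Free 1 would print the per-node free hugepage count, matching the 'HugePages_Free: 512' row in the node1 dump earlier in this trace.)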
00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.788 16:09:08 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45610056 kB' 'MemAvailable: 49113552 kB' 'Buffers: 2704 kB' 'Cached: 10448364 kB' 'SwapCached: 0 kB' 'Active: 7476604 kB' 'Inactive: 3506192 kB' 'Active(anon): 7081108 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534984 kB' 'Mapped: 174304 kB' 'Shmem: 6549380 kB' 'KReclaimable: 191292 kB' 'Slab: 559940 kB' 'SReclaimable: 191292 kB' 'SUnreclaim: 368648 kB' 'KernelStack: 12976 kB' 'PageTables: 8524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 8169944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196164 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1844828 kB' 'DirectMap2M: 14852096 kB' 'DirectMap1G: 52428800 kB' 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.788 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # continue 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.789 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
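[editor note] The long run of "-- # continue" entries above is not a failure; it is the get_meminfo helper in setup/common.sh walking every key of the meminfo snapshot under set -x until it reaches the field it was asked for (here HugePages_Surp). A minimal sketch of that loop, reconstructed from the trace and simplified (not the verbatim script):

    # sketch: return the value of one meminfo field, e.g. HugePages_Surp
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # every skipped key shows up as "continue" in the xtrace
            echo "$val"                        # with IFS=': ', val is already the bare number
            return 0
        done < /proc/meminfo
        return 1
    }

Used as surp=$(get_meminfo HugePages_Surp), this would yield 0 on this host, matching the "surp=0" recorded a little further down in the trace.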
00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.790 16:09:08 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45608904 kB' 'MemAvailable: 49112400 kB' 'Buffers: 2704 kB' 'Cached: 10448380 kB' 'SwapCached: 0 kB' 'Active: 7473012 kB' 'Inactive: 3506192 kB' 'Active(anon): 7077516 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531492 kB' 'Mapped: 174272 kB' 'Shmem: 6549396 kB' 'KReclaimable: 191292 kB' 'Slab: 559940 kB' 'SReclaimable: 191292 kB' 'SUnreclaim: 368648 kB' 'KernelStack: 12960 kB' 'PageTables: 8424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 8165660 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1844828 kB' 'DirectMap2M: 14852096 kB' 'DirectMap1G: 52428800 kB' 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.790 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
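[editor note] Before each scan, the trace shows how the helper picks its input: with no node argument the "-e /sys/devices/system/node/node/meminfo" test fails and it falls back to /proc/meminfo, then it strips the "Node N " prefix that the per-node files carry. A hedged sketch of that setup, using the variable names visible above (simplified, not the verbatim setup/common.sh):

    shopt -s extglob                          # needed for the +([0-9]) pattern below
    node=$1                                   # empty -> whole system; "0"/"1" -> one NUMA node
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"                 # snapshot the file once
    mem=("${mem[@]#Node +([0-9]) }")          # per-node files prefix each line with "Node N "

The same code path therefore serves both the system-wide lookups here and the node0 lookup later in this trace.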
00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.791 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.792 
16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.792 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:48.793 nr_hugepages=1024 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:48.793 resv_hugepages=0 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:48.793 surplus_hugepages=0 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:48.793 anon_hugepages=0 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45602420 
kB' 'MemAvailable: 49105916 kB' 'Buffers: 2704 kB' 'Cached: 10448400 kB' 'SwapCached: 0 kB' 'Active: 7475312 kB' 'Inactive: 3506192 kB' 'Active(anon): 7079816 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534032 kB' 'Mapped: 174140 kB' 'Shmem: 6549416 kB' 'KReclaimable: 191292 kB' 'Slab: 559900 kB' 'SReclaimable: 191292 kB' 'SUnreclaim: 368608 kB' 'KernelStack: 12912 kB' 'PageTables: 8240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 8169128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196068 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1844828 kB' 'DirectMap2M: 14852096 kB' 'DirectMap1G: 52428800 kB' 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.793 16:09:08 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.793 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
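[editor note] The bookkeeping traced above (surp=0, resv=0, nr_hugepages=1024) and the HugePages_* fields in the printed snapshot amount to the following consistency check; this is a sketch of the arithmetic, not the literal hugepages.sh code:

    total=1024           # HugePages_Total from the snapshot above
    nr_hugepages=1024    # the target this even_2G_alloc scenario configured
    surp=0               # HugePages_Surp, resolved earlier in the trace
    resv=0               # HugePages_Rsvd, resolved just above
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2
    # 1024 pages * 2048 kB (Hugepagesize) = 2097152 kB = 2 GiB, the "even 2G" allocation,
    # and exactly the Hugetlb: 2097152 kB figure in the snapshot
    echo "$(( 1024 * 2048 )) kB"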
00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
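[editor note] For comparison, a single meminfo field can be fetched without the per-key loop (and without the resulting wall of "continue" entries in the xtrace); a hedged one-liner, not part of the SPDK scripts:

    awk -v k=HugePages_Total '$1 == k":" {print $2}' /proc/meminfo

The loop form used by setup/common.sh has the advantage that one code path also handles the per-node meminfo files once the "Node N " prefix is stripped.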
00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.794 16:09:08 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.794 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
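[editor note] The entries that follow close out the HugePages_Total lookup (echo 1024) and start the per-node pass: get_nodes enumerates /sys/devices/system/node/node*, expects 512 pages on each of the two nodes (an even split of the 1024), and re-runs the lookup against /sys/devices/system/node/node0/meminfo. A sketch of that expectation, with names as they appear in the trace below:

    shopt -s extglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512      # even split: 1024 pages / 2 nodes
    done
    echo "no_nodes=${#nodes_sys[@]}"       # 2 on this machine, per the trace
    # each node is then re-read; HugePages_Total/Free should be 512 and HugePages_Surp 0
    grep -E 'HugePages_(Total|Free|Surp)' /sys/devices/system/node/node0/meminfo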
00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21779808 kB' 'MemUsed: 11097132 kB' 'SwapCached: 0 kB' 'Active: 5516060 kB' 'Inactive: 3357228 kB' 'Active(anon): 5244128 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3357228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8718372 kB' 'Mapped: 93784 kB' 'AnonPages: 158048 kB' 'Shmem: 5089212 kB' 'KernelStack: 6904 kB' 'PageTables: 3832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 94100 kB' 'Slab: 310300 kB' 'SReclaimable: 94100 kB' 'SUnreclaim: 216200 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.795 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.796 16:09:08 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.796 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 23822652 kB' 'MemUsed: 3842120 kB' 'SwapCached: 0 kB' 'Active: 1959448 kB' 'Inactive: 148964 kB' 'Active(anon): 1835884 kB' 'Inactive(anon): 0 kB' 'Active(file): 123564 kB' 'Inactive(file): 148964 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1732756 kB' 'Mapped: 80500 kB' 'AnonPages: 375768 kB' 'Shmem: 1460228 kB' 'KernelStack: 6024 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97192 kB' 'Slab: 249600 kB' 'SReclaimable: 97192 kB' 'SUnreclaim: 152408 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 
'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.797 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
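This loop feeds the per-node verification of even_2G_alloc: the 1024 global hugepages are expected to land as an even 512/512 split across the two NUMA nodes, with any reserved and surplus pages folded into each node's count before comparison (the "node0=512 expecting 512" / "node1=512 expecting 512" lines a little further down are that comparison's output). A rough, self-contained sketch of the check, reconstructed from the trace; the awk one-liner and the surp variable are stand-ins for the per-key HugePages_Surp scan traced above, and the real hugepages.sh does this via get_meminfo and sorted_t/sorted_s sets:

# Minimal sketch, assuming a 2-node NUMA host like the one in this run.
shopt -s extglob
resv=0                                      # no reserved pages in this run
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_test[${node##*node}]=512          # expected even split: 1024 pages / 2 nodes
done
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    # Stand-in for the HugePages_Surp scan traced above (both nodes report 0 here).
    surp=$(awk '$(NF-1) == "HugePages_Surp:" {print $NF}' \
        "/sys/devices/system/node/node$node/meminfo")
    (( nodes_test[node] += surp ))
    echo "node$node=${nodes_test[node]} expecting 512"
done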
00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:48.798 node0=512 expecting 512 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:48.798 node1=512 expecting 512 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:48.798 00:03:48.798 real 0m1.356s 00:03:48.798 user 0m0.587s 00:03:48.798 sys 0m0.729s 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:48.798 16:09:08 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:48.798 ************************************ 00:03:48.798 END TEST even_2G_alloc 00:03:48.798 ************************************ 00:03:48.798 16:09:08 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:48.798 16:09:08 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:48.798 16:09:08 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:48.798 16:09:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:48.798 ************************************ 00:03:48.798 START TEST odd_alloc 00:03:48.798 ************************************ 00:03:48.798 16:09:08 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:03:48.798 16:09:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:48.798 16:09:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:48.798 16:09:08 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:48.798 16:09:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:48.798 16:09:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:48.798 16:09:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:48.798 16:09:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:48.798 16:09:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:48.798 16:09:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:48.798 16:09:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:48.798 16:09:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:48.798 16:09:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:48.798 16:09:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:48.799 16:09:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:48.799 16:09:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.799 16:09:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:48.799 16:09:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:48.799 16:09:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:48.799 16:09:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.799 16:09:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:48.799 16:09:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:48.799 16:09:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:48.799 16:09:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.799 16:09:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:48.799 16:09:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:48.799 16:09:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:48.799 16:09:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.799 16:09:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:50.191 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:50.191 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:50.191 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:50.191 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:50.191 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:50.191 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:50.191 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:50.191 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:50.191 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:50.191 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:50.191 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:50.191 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:50.191 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:50.191 0000:80:04.3 
(8086 0e23): Already using the vfio-pci driver 00:03:50.191 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:50.191 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:50.191 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45602532 kB' 'MemAvailable: 49106024 kB' 'Buffers: 2704 kB' 'Cached: 10448492 kB' 'SwapCached: 0 kB' 'Active: 7464580 kB' 'Inactive: 3506192 kB' 'Active(anon): 7069084 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522752 kB' 'Mapped: 172632 kB' 'Shmem: 6549508 kB' 'KReclaimable: 191284 kB' 'Slab: 559868 kB' 'SReclaimable: 191284 kB' 'SUnreclaim: 368584 kB' 'KernelStack: 12752 kB' 'PageTables: 7476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 8144156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1844828 kB' 
'DirectMap2M: 14852096 kB' 'DirectMap1G: 52428800 kB' 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.191 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.192 16:09:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:50.192 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45609260 kB' 'MemAvailable: 49112752 kB' 'Buffers: 2704 kB' 'Cached: 10448492 kB' 'SwapCached: 0 kB' 'Active: 7464700 kB' 'Inactive: 3506192 kB' 'Active(anon): 7069204 kB' 'Inactive(anon): 0 
kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522948 kB' 'Mapped: 172588 kB' 'Shmem: 6549508 kB' 'KReclaimable: 191284 kB' 'Slab: 559872 kB' 'SReclaimable: 191284 kB' 'SUnreclaim: 368588 kB' 'KernelStack: 12832 kB' 'PageTables: 7672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 8144172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1844828 kB' 'DirectMap2M: 14852096 kB' 'DirectMap1G: 52428800 kB' 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.193 
16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.193 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.194 
16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.194 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45608628 kB' 'MemAvailable: 49112120 kB' 'Buffers: 2704 kB' 'Cached: 10448512 kB' 'SwapCached: 0 kB' 'Active: 7464988 kB' 'Inactive: 3506192 kB' 'Active(anon): 7069492 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523236 kB' 'Mapped: 172588 kB' 'Shmem: 6549528 kB' 'KReclaimable: 191284 kB' 'Slab: 559908 kB' 'SReclaimable: 191284 kB' 'SUnreclaim: 368624 kB' 'KernelStack: 12832 kB' 'PageTables: 7692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 8145432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1844828 kB' 'DirectMap2M: 14852096 kB' 'DirectMap1G: 52428800 kB' 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.195 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 16:09:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 
16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.196 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:50.197 nr_hugepages=1025
00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:50.197 resv_hugepages=0
00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:50.197 surplus_hugepages=0
00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:50.197 anon_hugepages=0
00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:50.197 16:09:09
setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45608028 kB' 'MemAvailable: 49111520 kB' 'Buffers: 2704 kB' 'Cached: 10448512 kB' 'SwapCached: 0 kB' 'Active: 7464676 kB' 'Inactive: 3506192 kB' 'Active(anon): 7069180 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522980 kB' 'Mapped: 172588 kB' 'Shmem: 6549528 kB' 'KReclaimable: 191284 kB' 'Slab: 559908 kB' 'SReclaimable: 191284 kB' 'SUnreclaim: 368624 kB' 'KernelStack: 12928 kB' 'PageTables: 7752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 8146416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196240 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1844828 kB' 'DirectMap2M: 14852096 kB' 'DirectMap1G: 52428800 kB' 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.197 16:09:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.197 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.198 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.198 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.198 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.198 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.198 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.198 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.198 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.198 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.198 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.198 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.198 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.198 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.198 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.198 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.198 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.198 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.198 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.198 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.198 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.198 16:09:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:50.198 16:09:09 setup.sh.hugepages.odd_alloc -- [...]
00:03:50.199 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.199 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:50.199 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:50.199 16:09:09 setup.sh.hugepages.odd_alloc --
setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:50.199 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:50.199 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:50.199 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:50.199 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:50.199 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:50.199 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:50.199 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:50.199 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:50.199 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:50.199 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:50.199 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:50.199 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.199 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:50.199 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:50.199 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.199 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.199 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:50.199 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:50.199 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.199 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.199 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.199 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.199 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21787636 kB' 'MemUsed: 11089304 kB' 'SwapCached: 0 kB' 'Active: 5513864 kB' 'Inactive: 3357228 kB' 'Active(anon): 5241932 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3357228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8718368 kB' 'Mapped: 92428 kB' 'AnonPages: 155856 kB' 'Shmem: 5089208 kB' 'KernelStack: 7336 kB' 'PageTables: 5100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 94100 kB' 'Slab: 310384 kB' 'SReclaimable: 94100 kB' 'SUnreclaim: 216284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:50.199 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.199 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.199 16:09:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:50.199 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.199 16:09:09 setup.sh.hugepages.odd_alloc -- [...]
00:03:50.200 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.201 16:09:09
setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.201 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:50.201 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:50.201 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:50.201 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:50.201 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:50.201 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.201 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:50.201 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:50.201 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.201 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.201 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:50.201 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:50.201 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.201 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.201 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.201 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.201 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 23817416 kB' 'MemUsed: 3847356 kB' 'SwapCached: 0 kB' 'Active: 1952004 kB' 'Inactive: 148964 kB' 'Active(anon): 1828440 kB' 'Inactive(anon): 0 kB' 'Active(file): 123564 kB' 'Inactive(file): 148964 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1732904 kB' 'Mapped: 80168 kB' 'AnonPages: 368152 kB' 'Shmem: 1460376 kB' 'KernelStack: 5912 kB' 'PageTables: 3988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97184 kB' 'Slab: 249524 kB' 'SReclaimable: 97184 kB' 'SUnreclaim: 152340 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:50.201 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.201 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.201 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.201 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.201 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.201 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.201 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.201 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.201 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.201 16:09:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.201 16:09:09 setup.sh.hugepages.odd_alloc -- [...]
00:03:50.202 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.202 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.202 16:09:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:50.202 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:50.202 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:50.202 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:50.202 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:50.202 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 --
# echo 'node0=512 expecting 513' 00:03:50.202 node0=512 expecting 513 00:03:50.202 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:50.202 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:50.202 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:50.202 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:50.202 node1=513 expecting 512 00:03:50.202 16:09:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:50.202 00:03:50.202 real 0m1.380s 00:03:50.202 user 0m0.594s 00:03:50.202 sys 0m0.746s 00:03:50.202 16:09:09 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:50.202 16:09:09 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:50.202 ************************************ 00:03:50.202 END TEST odd_alloc 00:03:50.202 ************************************ 00:03:50.202 16:09:09 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:50.202 16:09:09 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:50.202 16:09:09 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:50.202 16:09:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:50.202 ************************************ 00:03:50.202 START TEST custom_alloc 00:03:50.202 ************************************ 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:50.203 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:50.462 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:50.462 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:50.462 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:50.462 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 
-- # for node in "${!nodes_hp[@]}" 00:03:50.462 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:50.462 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:50.462 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:50.462 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:50.462 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:50.462 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:50.462 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:50.462 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:50.462 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:50.462 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:50.462 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:50.462 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:50.462 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:50.462 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:50.462 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:50.462 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:50.462 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:50.462 16:09:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:50.462 16:09:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.462 16:09:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:51.399 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:51.399 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:51.399 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:51.399 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:51.399 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:51.399 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:51.399 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:51.399 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:51.399 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:51.399 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:51.399 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:51.399 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:51.399 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:51.399 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:51.399 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:51.399 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:51.399 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:51.668 16:09:11 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:51.668 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:51.668 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:51.668 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:51.668 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:51.668 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:51.668 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:51.668 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:51.668 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:51.668 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:51.668 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:51.668 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:51.668 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:51.668 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:51.668 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44566404 kB' 'MemAvailable: 48069896 kB' 'Buffers: 2704 kB' 'Cached: 10448624 kB' 'SwapCached: 0 kB' 'Active: 7465348 kB' 'Inactive: 3506192 kB' 'Active(anon): 7069852 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523500 kB' 'Mapped: 172680 kB' 'Shmem: 6549640 kB' 'KReclaimable: 191284 kB' 'Slab: 559692 kB' 'SReclaimable: 191284 kB' 'SUnreclaim: 368408 kB' 'KernelStack: 12816 kB' 'PageTables: 7628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 8144416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1844828 kB' 'DirectMap2M: 14852096 kB' 'DirectMap1G: 52428800 kB' 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.669 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44565188 kB' 'MemAvailable: 48068680 kB' 'Buffers: 2704 kB' 'Cached: 10448628 kB' 'SwapCached: 0 kB' 'Active: 7465328 kB' 'Inactive: 3506192 kB' 'Active(anon): 7069832 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523448 kB' 'Mapped: 172680 kB' 'Shmem: 6549644 kB' 'KReclaimable: 191284 kB' 'Slab: 559692 kB' 'SReclaimable: 191284 kB' 'SUnreclaim: 368408 kB' 'KernelStack: 12832 kB' 'PageTables: 7640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 8144432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1844828 kB' 'DirectMap2M: 14852096 kB' 'DirectMap1G: 52428800 kB' 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.670 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.671 16:09:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.671 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:51.672 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44565552 kB' 'MemAvailable: 48069044 kB' 'Buffers: 2704 kB' 'Cached: 10448628 kB' 'SwapCached: 0 kB' 'Active: 7465052 kB' 'Inactive: 3506192 kB' 'Active(anon): 7069556 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523104 kB' 'Mapped: 172564 kB' 'Shmem: 6549644 kB' 'KReclaimable: 191284 kB' 'Slab: 559700 kB' 'SReclaimable: 191284 kB' 'SUnreclaim: 368416 kB' 'KernelStack: 12800 kB' 'PageTables: 7532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 8144456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1844828 kB' 'DirectMap2M: 14852096 kB' 'DirectMap1G: 52428800 kB' 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.673 16:09:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.673 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.674 16:09:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.674 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@100 -- # resv=0 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:51.675 nr_hugepages=1536 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:51.675 resv_hugepages=0 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:51.675 surplus_hugepages=0 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:51.675 anon_hugepages=0 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44565300 kB' 'MemAvailable: 48068792 kB' 'Buffers: 2704 kB' 'Cached: 10448664 kB' 'SwapCached: 0 kB' 'Active: 7465136 kB' 'Inactive: 3506192 kB' 'Active(anon): 7069640 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523236 kB' 'Mapped: 172564 kB' 'Shmem: 6549680 kB' 'KReclaimable: 191284 kB' 'Slab: 559684 kB' 'SReclaimable: 191284 kB' 'SUnreclaim: 368400 kB' 'KernelStack: 12832 kB' 'PageTables: 7628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 8144476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1844828 kB' 'DirectMap2M: 14852096 kB' 'DirectMap1G: 52428800 kB' 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.675 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.676 16:09:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.676 16:09:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.676 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.677 16:09:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # 
get_meminfo HugePages_Surp 0 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21801476 kB' 'MemUsed: 11075464 kB' 'SwapCached: 0 kB' 'Active: 5512712 kB' 'Inactive: 3357228 kB' 'Active(anon): 5240780 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3357228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8718376 kB' 'Mapped: 92428 kB' 'AnonPages: 154724 kB' 'Shmem: 5089216 kB' 'KernelStack: 6840 kB' 'PageTables: 3432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 94100 kB' 'Slab: 310216 kB' 'SReclaimable: 94100 kB' 'SUnreclaim: 216116 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.677 16:09:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.677 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.678 16:09:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.678 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:51.679 16:09:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 22763824 kB' 'MemUsed: 4900948 kB' 'SwapCached: 0 kB' 'Active: 1952468 kB' 'Inactive: 148964 kB' 'Active(anon): 1828904 kB' 'Inactive(anon): 0 kB' 'Active(file): 123564 kB' 'Inactive(file): 148964 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1733036 kB' 'Mapped: 80136 kB' 'AnonPages: 368512 kB' 'Shmem: 1460508 kB' 'KernelStack: 5992 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97184 kB' 'Slab: 249468 kB' 'SReclaimable: 97184 kB' 'SUnreclaim: 152284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.679 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.680 16:09:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.680 16:09:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:51.680 node0=512 expecting 512 00:03:51.680 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:51.681 16:09:11 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:51.681 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:51.681 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:51.681 node1=1024 expecting 1024 00:03:51.681 16:09:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:51.681 00:03:51.681 real 0m1.440s 00:03:51.681 user 0m0.618s 00:03:51.681 sys 0m0.784s 00:03:51.681 16:09:11 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:51.681 16:09:11 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:51.681 ************************************ 00:03:51.681 END TEST custom_alloc 00:03:51.681 ************************************ 00:03:51.681 16:09:11 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:51.681 16:09:11 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:51.681 16:09:11 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:51.681 16:09:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:51.971 ************************************ 00:03:51.971 START TEST no_shrink_alloc 00:03:51.971 ************************************ 00:03:51.971 16:09:11 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:03:51.971 16:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:51.971 16:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:51.971 16:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:51.971 16:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:51.971 16:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:51.971 16:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:51.971 16:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:51.971 16:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:51.971 16:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:51.971 16:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:51.971 16:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:51.971 16:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:51.971 16:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:51.971 16:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:51.971 16:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:51.971 16:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:51.971 16:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:51.971 16:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:51.971 16:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 
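The trace above shows get_test_nr_hugepages being handed a total size of 2097152 kB for node 0 and arriving at nr_hugepages=1024, which is then recorded for that node. The following is only a hedged sketch of that arithmetic, not the SPDK setup scripts themselves; variable names such as requested_size_kb and target_node are illustrative assumptions, and the 2048 kB page size is taken from the Hugepagesize value printed in the meminfo dumps below.

#!/usr/bin/env bash
# Sketch (assumed names, not hugepages.sh): derive a per-node hugepage count
# from a requested total size, as the trace above demonstrates for node 0.
requested_size_kb=2097152      # total size requested by the test, in kB
default_hugepage_kb=2048       # Hugepagesize reported in /proc/meminfo
target_node=0                  # node id passed to the test

# 2097152 kB / 2048 kB per page = 1024 hugepages
nr_hugepages=$(( requested_size_kb / default_hugepage_kb ))

# record the request against the chosen node, mirroring nodes_test[0]=1024
nodes_test=()
nodes_test[$target_node]=$nr_hugepages
echo "node${target_node}=${nodes_test[$target_node]}"   # prints: node0=1024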
00:03:51.971 16:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:51.971 16:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.971 16:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:52.908 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:52.908 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:52.908 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:52.908 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:52.908 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:52.908 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:52.908 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:52.908 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:52.908 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:52.908 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:52.908 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:52.908 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:52.908 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:52.908 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:52.908 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:52.908 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:52.908 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.172 16:09:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45620648 kB' 'MemAvailable: 49124140 kB' 'Buffers: 2704 kB' 'Cached: 10448752 kB' 'SwapCached: 0 kB' 'Active: 7465812 kB' 'Inactive: 3506192 kB' 'Active(anon): 7070316 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523780 kB' 'Mapped: 172672 kB' 'Shmem: 6549768 kB' 'KReclaimable: 191284 kB' 'Slab: 559700 kB' 'SReclaimable: 191284 kB' 'SUnreclaim: 368416 kB' 'KernelStack: 12864 kB' 'PageTables: 7696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 8144704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196224 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1844828 kB' 'DirectMap2M: 14852096 kB' 'DirectMap1G: 52428800 kB' 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.172 16:09:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.172 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.173 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45621144 kB' 'MemAvailable: 49124636 kB' 'Buffers: 2704 kB' 'Cached: 10448752 kB' 'SwapCached: 0 kB' 'Active: 7465604 kB' 'Inactive: 3506192 kB' 'Active(anon): 7070108 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523492 kB' 'Mapped: 172656 kB' 'Shmem: 6549768 kB' 'KReclaimable: 191284 kB' 'Slab: 559688 kB' 'SReclaimable: 191284 kB' 'SUnreclaim: 368404 kB' 'KernelStack: 12848 kB' 'PageTables: 7628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 8144720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196192 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1844828 kB' 
'DirectMap2M: 14852096 kB' 'DirectMap1G: 52428800 kB' 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.174 
16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.174 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.175 
16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.175 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.176 16:09:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.176 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45621144 kB' 'MemAvailable: 49124636 kB' 'Buffers: 2704 kB' 'Cached: 10448756 kB' 'SwapCached: 0 kB' 'Active: 7465236 kB' 'Inactive: 3506192 kB' 'Active(anon): 7069740 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523072 kB' 'Mapped: 172580 kB' 'Shmem: 6549772 kB' 'KReclaimable: 191284 kB' 'Slab: 559712 kB' 'SReclaimable: 191284 kB' 'SUnreclaim: 368428 kB' 'KernelStack: 12864 kB' 'PageTables: 7624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 8144744 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196208 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1844828 kB' 'DirectMap2M: 14852096 kB' 'DirectMap1G: 52428800 kB' 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.177 16:09:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.177 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.178 16:09:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.178 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.179 16:09:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo 
nr_hugepages=1024 00:03:53.179 nr_hugepages=1024 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:53.179 resv_hugepages=0 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:53.179 surplus_hugepages=0 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:53.179 anon_hugepages=0 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45621144 kB' 'MemAvailable: 49124636 kB' 'Buffers: 2704 kB' 'Cached: 10448792 kB' 'SwapCached: 0 kB' 'Active: 7465816 kB' 'Inactive: 3506192 kB' 'Active(anon): 7070320 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523616 kB' 'Mapped: 172580 kB' 'Shmem: 6549808 kB' 'KReclaimable: 191284 kB' 'Slab: 559712 kB' 'SReclaimable: 191284 kB' 'SUnreclaim: 368428 kB' 'KernelStack: 12864 kB' 'PageTables: 7624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 8144764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196208 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1844828 kB' 'DirectMap2M: 14852096 kB' 'DirectMap1G: 52428800 kB' 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- 
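
The trace above is setup/common.sh's get_meminfo scanning /proc/meminfo one field at a time with IFS=': ' until the requested key (HugePages_Rsvd) matches, then echoing its value; hugepages.sh then prints the summary (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and checks (( 1024 == nr_hugepages + surp + resv )). A minimal stand-alone sketch of the same kind of lookup, assuming a hypothetical helper name and simplified node handling (not the actual SPDK function):

    #!/usr/bin/env bash
    shopt -s extglob
    # Look up one field from /proc/meminfo, or from a per-node meminfo file.
    meminfo_value() {
        local key=$1 node=${2-} file=/proc/meminfo
        local line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            file=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }        # per-node lines carry a "Node N " prefix
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$key" ]]; then
                echo "$val"                    # numeric value only, the "kB" suffix is dropped
                return 0
            fi
        done < "$file"
        return 1
    }

    resv=$(meminfo_value HugePages_Rsvd)       # 0 in the run above
    nr=$(meminfo_value HugePages_Total)        # 1024 in the run above
    (( nr == 1024 + 0 + 0 )) && echo 'hugepage accounting consistent'
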
setup/common.sh@32 -- # continue 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.179 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.180 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.181 16:09:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.181 16:09:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:53.181 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:53.181 
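
Above, the same field-by-field scan repeats for HugePages_Total (it returns 1024), and hugepages.sh's get_nodes then walks /sys/devices/system/node/node+([0-9]) to record the expected hugepage count per NUMA node (here nodes_sys[0]=1024 and nodes_sys[1]=0). A short sketch of that per-node enumeration, reading the kernel's standard per-node 2048 kB hugepage counter; the variable names mirror the trace but the exact logic is an illustration, not the SPDK script:

    # Count NUMA nodes and the 2 MB hugepages currently allocated on each.
    declare -A nodes_sys
    for node in /sys/devices/system/node/node[0-9]*; do
        id=${node##*node}
        nodes_sys[$id]=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages" 2>/dev/null || echo 0)
    done
    echo "no_nodes=${#nodes_sys[@]}"           # 2 on the machine traced here
    for id in "${!nodes_sys[@]}"; do
        echo "node$id: ${nodes_sys[$id]} hugepages"
    done
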
16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20757000 kB' 'MemUsed: 12119940 kB' 'SwapCached: 0 kB' 'Active: 5513072 kB' 'Inactive: 3357228 kB' 'Active(anon): 5241140 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3357228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8718444 kB' 'Mapped: 92428 kB' 'AnonPages: 154964 kB' 'Shmem: 5089284 kB' 'KernelStack: 6840 kB' 'PageTables: 3428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 94100 kB' 'Slab: 310120 kB' 'SReclaimable: 94100 kB' 'SUnreclaim: 216020 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.182 16:09:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.182 16:09:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.182 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.183 16:09:12 
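
For the per-node pass, get_meminfo switches its source file to /sys/devices/system/node/node0/meminfo, whose snapshot was printed above. That snapshot's MemUsed field is derived, MemTotal minus MemFree: 32876940 kB - 20757000 kB = 12119940 kB. An illustrative one-liner that recomputes it from the same file (field positions assume the usual 'Node 0 <Key>: <value> kB' layout):

    awk '/MemTotal:/ {t=$4} /MemFree:/ {f=$4} END {print t - f, "kB used on node0"}' \
        /sys/devices/system/node/node0/meminfo
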
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:53.183 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:53.184 node0=1024 expecting 1024
00:03:53.184 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:53.184 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:53.184 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:53.184 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:03:53.184 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:53.184 16:09:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:54.564 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:54.564 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:54.564 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:54.564 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:54.564 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:54.564 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:54.564 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:54.564 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:54.564 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:54.564 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:54.564 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:54.564 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:54.564 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:54.564 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:54.564 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:54.564 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:54.564 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:54.564 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:03:54.564 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:54.564 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:54.564 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:54.564 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:54.564 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:54.564 16:09:14
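
This is the heart of the no_shrink_alloc case: node0 already holds 1024 hugepages, setup.sh is re-run with NRHUGE=512 and CLEAR_HUGE=no, and instead of shrinking the pool it reports 'Requested 512 hugepages but 1024 already allocated on node0'. A hedged sketch of such a grow-only guard (illustrative, not the logic of scripts/setup.sh itself):

    requested=${NRHUGE:-512}
    node=0
    nr_path=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
    current=$(cat "$nr_path")
    if (( current >= requested )); then
        echo "INFO: Requested $requested hugepages but $current already allocated on node$node"
    else
        echo "$requested" > "$nr_path"         # grow the per-node pool; never shrink it here
    fi

verify_nr_hugepages, invoked at hugepages.sh@204 above, then re-reads the counters to confirm that the original 1024 pages survived.
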
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:54.564 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:54.564 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:54.564 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:54.564 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:54.564 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:54.564 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:54.564 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.564 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.564 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.564 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45593428 kB' 'MemAvailable: 49096920 kB' 'Buffers: 2704 kB' 'Cached: 10448856 kB' 'SwapCached: 0 kB' 'Active: 7471744 kB' 'Inactive: 3506192 kB' 'Active(anon): 7076248 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529692 kB' 'Mapped: 173536 kB' 'Shmem: 6549872 kB' 'KReclaimable: 191284 kB' 'Slab: 559900 kB' 'SReclaimable: 191284 kB' 'SUnreclaim: 368616 kB' 'KernelStack: 12848 kB' 'PageTables: 7628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 8151056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196116 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1844828 kB' 'DirectMap2M: 14852096 kB' 'DirectMap1G: 52428800 kB' 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.565 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.566 16:09:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.566 16:09:14 
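The long runs of "continue" above are get_meminfo walking the meminfo dump key by key until it reaches the requested field (AnonHugePages here, yielding anon=0). Below is a minimal sketch of that lookup pattern using a hypothetical helper name, meminfo_value; it is not the exact setup/common.sh implementation, and the sed call only mimics the "Node N " prefix stripping the trace shows for per-node meminfo files:

    # Sketch of the meminfo lookup pattern seen in the trace (hypothetical helper,
    # not the real setup/common.sh function).
    meminfo_value() {                     # usage: meminfo_value <Field> [node]
        local get=$1 node=$2 mem_f=/proc/meminfo var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo   # per-node lines start with "Node N "
        fi
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"               # first whitespace-separated value, e.g. "1024"
                return 0
            fi
        done < <(sed "s/^Node ${node:-X} //" "$mem_f")
        return 1
    }
    # meminfo_value HugePages_Total   -> 1024 on this box
    # meminfo_value AnonHugePages     -> 0, matching anon=0 above
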
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45601016 kB' 'MemAvailable: 49104508 kB' 'Buffers: 2704 kB' 'Cached: 10448856 kB' 'SwapCached: 0 kB' 'Active: 7465392 kB' 'Inactive: 3506192 kB' 'Active(anon): 7069896 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523300 kB' 'Mapped: 173080 kB' 'Shmem: 6549872 kB' 'KReclaimable: 191284 kB' 'Slab: 559868 kB' 'SReclaimable: 191284 kB' 'SUnreclaim: 368584 kB' 'KernelStack: 12832 kB' 'PageTables: 7556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 8144956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1844828 kB' 'DirectMap2M: 14852096 kB' 'DirectMap1G: 52428800 kB' 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.566 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.567 
16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.567 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.568 16:09:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.568 
16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:54.568 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45601008 kB' 'MemAvailable: 49104500 kB' 'Buffers: 2704 kB' 'Cached: 10448876 kB' 'SwapCached: 0 kB' 'Active: 7465560 kB' 
'Inactive: 3506192 kB' 'Active(anon): 7070064 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523436 kB' 'Mapped: 172588 kB' 'Shmem: 6549892 kB' 'KReclaimable: 191284 kB' 'Slab: 559860 kB' 'SReclaimable: 191284 kB' 'SUnreclaim: 368576 kB' 'KernelStack: 12864 kB' 'PageTables: 7632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 8144976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1844828 kB' 'DirectMap2M: 14852096 kB' 'DirectMap1G: 52428800 kB' 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.569 16:09:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.569 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.570 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:54.571 nr_hugepages=1024 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:54.571 resv_hugepages=0 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:54.571 surplus_hugepages=0 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:54.571 anon_hugepages=0 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45601368 kB' 'MemAvailable: 49104860 kB' 'Buffers: 2704 kB' 
'Cached: 10448900 kB' 'SwapCached: 0 kB' 'Active: 7465580 kB' 'Inactive: 3506192 kB' 'Active(anon): 7070084 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523432 kB' 'Mapped: 172588 kB' 'Shmem: 6549916 kB' 'KReclaimable: 191284 kB' 'Slab: 559852 kB' 'SReclaimable: 191284 kB' 'SUnreclaim: 368568 kB' 'KernelStack: 12864 kB' 'PageTables: 7632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 8145000 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1844828 kB' 'DirectMap2M: 14852096 kB' 'DirectMap1G: 52428800 kB' 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.571 16:09:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.571 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.572 16:09:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.572 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20734116 kB' 'MemUsed: 12142824 kB' 'SwapCached: 0 kB' 'Active: 5512504 kB' 'Inactive: 3357228 kB' 'Active(anon): 5240572 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3357228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8718544 
kB' 'Mapped: 92428 kB' 'AnonPages: 154340 kB' 'Shmem: 5089384 kB' 'KernelStack: 6824 kB' 'PageTables: 3344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 94100 kB' 'Slab: 310092 kB' 'SReclaimable: 94100 kB' 'SUnreclaim: 215992 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.573 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.574 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.575 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.575 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.575 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.575 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.575 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.575 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.575 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.575 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.575 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.575 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.575 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.575 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.575 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.575 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.575 16:09:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.575 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.575 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.575 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.575 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.575 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.575 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.575 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.575 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.575 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.575 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.575 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.575 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.575 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.575 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.575 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.575 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:54.575 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:54.575 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:54.575 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:54.575 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:54.575 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:54.575 node0=1024 expecting 1024 00:03:54.575 16:09:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:54.575 00:03:54.575 real 0m2.801s 00:03:54.575 user 0m1.181s 00:03:54.575 sys 0m1.536s 00:03:54.575 16:09:14 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:54.575 16:09:14 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:54.575 ************************************ 00:03:54.575 END TEST no_shrink_alloc 00:03:54.575 ************************************ 00:03:54.575 16:09:14 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:54.575 16:09:14 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:54.575 16:09:14 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:54.575 16:09:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:54.575 16:09:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:54.575 16:09:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:54.575 16:09:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:54.575 16:09:14 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:54.575 16:09:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:54.575 16:09:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:54.575 16:09:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:54.575 16:09:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:54.575 16:09:14 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:54.575 16:09:14 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:54.575 00:03:54.575 real 0m11.265s 00:03:54.575 user 0m4.447s 00:03:54.575 sys 0m5.731s 00:03:54.575 16:09:14 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:54.575 16:09:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:54.575 ************************************ 00:03:54.575 END TEST hugepages 00:03:54.575 ************************************ 00:03:54.575 16:09:14 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:54.575 16:09:14 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:54.575 16:09:14 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:54.575 16:09:14 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:54.575 ************************************ 00:03:54.575 START TEST driver 00:03:54.575 ************************************ 00:03:54.575 16:09:14 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:54.835 * Looking for test storage... 
00:03:54.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:54.835 16:09:14 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:54.835 16:09:14 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:54.835 16:09:14 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:57.366 16:09:16 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:57.366 16:09:16 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:57.366 16:09:16 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:57.366 16:09:16 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:57.367 ************************************ 00:03:57.367 START TEST guess_driver 00:03:57.367 ************************************ 00:03:57.367 16:09:16 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:03:57.367 16:09:16 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:57.367 16:09:16 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:57.367 16:09:16 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:57.367 16:09:16 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:57.367 16:09:16 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:57.367 16:09:16 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:57.367 16:09:16 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:57.367 16:09:16 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:57.367 16:09:16 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:57.367 16:09:16 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:03:57.367 16:09:16 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:57.367 16:09:16 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:57.367 16:09:16 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:57.367 16:09:16 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:57.367 16:09:16 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:57.367 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:57.367 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:57.367 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:57.367 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:57.367 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:57.367 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:57.367 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:57.367 16:09:16 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:57.367 16:09:16 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:57.367 16:09:16 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:57.367 16:09:16 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:57.367 16:09:16 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:57.367 Looking for driver=vfio-pci 00:03:57.367 16:09:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:57.367 16:09:16 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:57.367 16:09:16 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.367 16:09:16 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:58.302 16:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:58.302 16:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:58.302 16:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:58.302 16:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:58.303 16:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:58.303 16:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:58.303 16:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:58.303 16:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:58.303 16:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:58.303 16:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:58.303 16:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:58.303 16:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:58.303 16:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:58.303 16:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:58.303 16:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:58.303 16:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:58.303 16:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:58.303 16:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:58.303 16:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:58.303 16:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:58.303 16:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:58.303 16:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:58.303 16:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:58.303 16:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:58.303 16:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:58.303 16:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:58.303 16:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:58.303 16:09:18 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:58.303 16:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:58.303 16:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:58.303 16:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:58.303 16:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:58.303 16:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:58.303 16:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:58.303 16:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:58.303 16:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:58.303 16:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:58.303 16:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:58.303 16:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:58.561 16:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:58.561 16:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:58.561 16:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:58.561 16:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:58.561 16:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:58.561 16:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:58.561 16:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:58.561 16:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:58.561 16:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:59.495 16:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:59.495 16:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:59.495 16:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:59.495 16:09:19 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:59.495 16:09:19 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:59.495 16:09:19 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:59.495 16:09:19 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:02.028 00:04:02.028 real 0m4.651s 00:04:02.028 user 0m1.120s 00:04:02.028 sys 0m1.706s 00:04:02.028 16:09:21 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:02.028 16:09:21 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:02.028 ************************************ 00:04:02.028 END TEST guess_driver 00:04:02.028 ************************************ 00:04:02.028 00:04:02.028 real 0m7.175s 00:04:02.028 user 0m1.672s 00:04:02.028 sys 0m2.717s 00:04:02.028 16:09:21 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:02.028 
16:09:21 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:02.028 ************************************ 00:04:02.028 END TEST driver 00:04:02.028 ************************************ 00:04:02.028 16:09:21 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:02.028 16:09:21 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:02.028 16:09:21 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:02.028 16:09:21 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:02.028 ************************************ 00:04:02.028 START TEST devices 00:04:02.028 ************************************ 00:04:02.028 16:09:21 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:02.028 * Looking for test storage... 00:04:02.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:02.028 16:09:21 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:02.028 16:09:21 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:02.028 16:09:21 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:02.028 16:09:21 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:03.406 16:09:22 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:03.406 16:09:22 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:03.406 16:09:22 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:03.406 16:09:22 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:03.406 16:09:22 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:03.406 16:09:22 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:03.406 16:09:22 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:03.406 16:09:22 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:03.406 16:09:22 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:03.406 16:09:22 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:03.406 16:09:22 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:03.406 16:09:22 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:03.406 16:09:22 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:03.406 16:09:22 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:03.406 16:09:22 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:03.406 16:09:22 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:03.406 16:09:22 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:03.406 16:09:22 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:04:03.406 16:09:22 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:03.406 16:09:22 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:03.406 16:09:22 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:03.406 16:09:22 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:03.406 No valid GPT data, 
bailing 00:04:03.406 16:09:23 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:03.406 16:09:23 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:03.406 16:09:23 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:03.406 16:09:23 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:03.406 16:09:23 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:03.406 16:09:23 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:03.406 16:09:23 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:03.406 16:09:23 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:03.406 16:09:23 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:03.406 16:09:23 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:04:03.406 16:09:23 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:03.406 16:09:23 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:03.406 16:09:23 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:03.406 16:09:23 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:03.406 16:09:23 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:03.406 16:09:23 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:03.406 ************************************ 00:04:03.406 START TEST nvme_mount 00:04:03.406 ************************************ 00:04:03.406 16:09:23 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:04:03.406 16:09:23 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:03.406 16:09:23 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:03.406 16:09:23 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:03.406 16:09:23 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:03.406 16:09:23 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:03.406 16:09:23 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:03.406 16:09:23 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:03.406 16:09:23 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:03.406 16:09:23 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:03.406 16:09:23 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:03.406 16:09:23 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:03.406 16:09:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:03.406 16:09:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:03.406 16:09:23 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:03.406 16:09:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:03.406 16:09:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:03.406 16:09:23 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:03.406 16:09:23 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:03.406 16:09:23 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:04.359 Creating new GPT entries in memory. 00:04:04.359 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:04.359 other utilities. 00:04:04.359 16:09:24 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:04.359 16:09:24 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:04.359 16:09:24 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:04.359 16:09:24 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:04.359 16:09:24 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:05.739 Creating new GPT entries in memory. 00:04:05.739 The operation has completed successfully. 00:04:05.739 16:09:25 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:05.739 16:09:25 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:05.739 16:09:25 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 503152 00:04:05.739 16:09:25 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.739 16:09:25 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:05.739 16:09:25 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.739 16:09:25 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:05.739 16:09:25 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:05.739 16:09:25 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.739 16:09:25 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:05.739 16:09:25 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:05.739 16:09:25 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:05.739 16:09:25 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.739 16:09:25 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:05.739 16:09:25 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:05.739 16:09:25 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:05.739 16:09:25 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:05.739 16:09:25 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
00:04:05.739 16:09:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.739 16:09:25 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:05.739 16:09:25 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:05.739 16:09:25 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.739 16:09:25 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:06.676 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:06.676 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:06.676 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:06.676 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.676 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:06.676 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.676 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:06.676 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.677 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:06.677 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.677 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:06.677 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.677 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:06.677 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.677 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:06.677 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.677 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:06.677 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.677 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:06.677 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.677 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:06.677 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.677 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:06.677 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.677 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:06.677 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:06.677 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:06.677 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.677 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:06.677 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.677 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:06.677 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.677 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:06.677 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.677 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:06.677 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.936 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:06.936 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:06.936 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:06.936 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:06.936 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:06.936 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:06.936 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:06.936 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:06.936 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:06.936 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:06.936 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:06.936 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:06.936 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:07.195 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:07.195 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:07.195 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:07.195 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:07.195 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:07.195 16:09:26 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:07.195 16:09:26 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:07.195 16:09:26 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:07.195 16:09:26 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:07.195 16:09:26 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:07.195 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:07.195 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:07.195 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:07.195 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:07.195 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:07.195 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:07.195 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:07.195 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:07.195 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:07.195 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.195 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:07.195 16:09:26 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:07.195 16:09:26 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.195 16:09:26 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:08.572 16:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:08.572 16:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:08.572 16:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:08.572 16:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.572 16:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:08.572 16:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.572 16:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:08.572 16:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.572 16:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:08.572 16:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.572 16:09:27 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:08.572 16:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.572 16:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:08.572 16:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.572 16:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:08.572 16:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.572 16:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:08.572 16:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.572 16:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:08.572 16:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.572 16:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:08.572 16:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.572 16:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:08.572 16:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.572 16:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:08.572 16:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.572 16:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:08.572 16:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.572 16:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:08.572 16:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.572 16:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:08.572 16:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.572 16:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:08.572 16:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.572 16:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:08.572 16:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.572 16:09:28 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:08.572 16:09:28 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:08.572 16:09:28 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:08.572 16:09:28 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:08.572 16:09:28 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:08.572 16:09:28 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:08.572 16:09:28 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:04:08.572 16:09:28 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:08.572 16:09:28 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:08.572 16:09:28 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:08.572 16:09:28 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:08.572 16:09:28 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:08.572 16:09:28 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:08.572 16:09:28 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:08.572 16:09:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.572 16:09:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:08.572 16:09:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:08.572 16:09:28 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:08.572 16:09:28 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:09.947 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:09.947 00:04:09.947 real 0m6.398s 00:04:09.947 user 0m1.591s 00:04:09.947 sys 0m2.392s 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:09.947 16:09:29 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:09.947 ************************************ 00:04:09.947 END TEST nvme_mount 00:04:09.947 ************************************ 
00:04:09.947 16:09:29 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:09.947 16:09:29 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:09.947 16:09:29 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:09.947 16:09:29 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:09.947 ************************************ 00:04:09.947 START TEST dm_mount 00:04:09.947 ************************************ 00:04:09.947 16:09:29 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:04:09.947 16:09:29 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:09.947 16:09:29 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:09.947 16:09:29 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:09.947 16:09:29 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:09.947 16:09:29 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:09.947 16:09:29 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:09.947 16:09:29 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:09.947 16:09:29 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:09.947 16:09:29 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:09.947 16:09:29 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:09.947 16:09:29 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:09.947 16:09:29 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:09.947 16:09:29 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:09.947 16:09:29 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:09.947 16:09:29 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:09.947 16:09:29 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:09.947 16:09:29 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:09.947 16:09:29 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:09.947 16:09:29 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:09.947 16:09:29 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:09.948 16:09:29 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:10.882 Creating new GPT entries in memory. 00:04:10.882 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:10.882 other utilities. 00:04:10.882 16:09:30 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:10.883 16:09:30 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:10.883 16:09:30 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:10.883 16:09:30 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:10.883 16:09:30 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:11.846 Creating new GPT entries in memory. 00:04:11.846 The operation has completed successfully. 
00:04:11.846 16:09:31 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:11.846 16:09:31 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:11.846 16:09:31 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:11.846 16:09:31 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:11.846 16:09:31 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:13.225 The operation has completed successfully. 00:04:13.225 16:09:32 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:13.225 16:09:32 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:13.225 16:09:32 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 505545 00:04:13.225 16:09:32 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:13.225 16:09:32 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:13.225 16:09:32 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:13.225 16:09:32 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:13.225 16:09:32 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:13.225 16:09:32 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:13.225 16:09:32 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:13.225 16:09:32 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:13.225 16:09:32 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:13.225 16:09:32 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:13.225 16:09:32 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:13.225 16:09:32 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:13.225 16:09:32 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:13.225 16:09:32 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:13.225 16:09:32 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:13.225 16:09:32 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:13.225 16:09:32 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:13.225 16:09:32 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:13.225 16:09:32 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:13.225 16:09:32 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:13.226 16:09:32 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:13.226 16:09:32 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:13.226 16:09:32 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:13.226 16:09:32 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:13.226 16:09:32 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:13.226 16:09:32 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:13.226 16:09:32 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:13.226 16:09:32 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:13.226 16:09:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.226 16:09:32 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:13.226 16:09:32 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:13.226 16:09:32 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.226 16:09:32 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:14.160 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:14.419 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:14.419 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:14.419 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:14.419 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:14.419 16:09:33 
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:14.419 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:14.419 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:14.419 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:14.419 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.419 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:14.419 16:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:14.419 16:09:33 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.419 16:09:33 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:15.355 16:09:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:15.355 16:09:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:15.355 16:09:34 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:15.355 16:09:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.355 16:09:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:15.355 16:09:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.356 16:09:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:15.356 16:09:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.356 16:09:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:15.356 16:09:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.356 16:09:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:15.356 16:09:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.356 16:09:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:15.356 16:09:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.356 16:09:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:15.356 16:09:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.356 16:09:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:15.356 16:09:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.356 16:09:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:15.356 16:09:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.356 16:09:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:15.356 16:09:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.356 16:09:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:15.356 16:09:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.356 16:09:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:15.356 16:09:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.356 16:09:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:15.356 16:09:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.356 16:09:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:15.356 16:09:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.356 16:09:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:15.356 16:09:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.356 16:09:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:15.356 16:09:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.356 16:09:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:15.356 16:09:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.615 16:09:35 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:15.615 16:09:35 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:15.615 16:09:35 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:15.615 16:09:35 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:15.615 16:09:35 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:15.615 16:09:35 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:15.615 16:09:35 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:15.615 16:09:35 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:15.615 16:09:35 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:15.615 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:15.615 16:09:35 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:15.616 16:09:35 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:15.616 00:04:15.616 real 0m5.723s 00:04:15.616 user 0m0.956s 00:04:15.616 sys 0m1.625s 00:04:15.616 16:09:35 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:15.616 16:09:35 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:15.616 ************************************ 00:04:15.616 END TEST dm_mount 00:04:15.616 ************************************ 00:04:15.616 16:09:35 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:15.616 16:09:35 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:15.616 16:09:35 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:15.616 16:09:35 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:15.616 16:09:35 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:15.616 16:09:35 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:15.616 16:09:35 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:15.875 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:15.875 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:15.875 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:15.875 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:15.875 16:09:35 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:15.875 16:09:35 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:15.875 16:09:35 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:15.875 16:09:35 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:15.875 16:09:35 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:15.875 16:09:35 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:15.875 16:09:35 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:15.875 00:04:15.875 real 0m14.011s 00:04:15.875 user 0m3.179s 00:04:15.875 sys 0m5.038s 00:04:15.875 16:09:35 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:15.875 16:09:35 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:15.875 ************************************ 00:04:15.875 END TEST devices 00:04:15.875 ************************************ 00:04:15.875 00:04:15.875 real 0m42.992s 00:04:15.875 user 0m12.519s 00:04:15.875 sys 0m18.820s 00:04:15.875 16:09:35 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:15.875 16:09:35 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:15.875 ************************************ 00:04:15.875 END TEST setup.sh 00:04:15.875 ************************************ 00:04:15.875 16:09:35 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:17.250 Hugepages 00:04:17.250 node hugesize free / total 00:04:17.250 node0 1048576kB 0 / 0 00:04:17.250 node0 2048kB 2048 / 2048 00:04:17.250 node1 1048576kB 0 / 0 00:04:17.250 node1 2048kB 0 / 0 00:04:17.250 00:04:17.250 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:17.250 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:04:17.250 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:04:17.250 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:04:17.250 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:04:17.250 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:04:17.250 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:04:17.250 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:04:17.250 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:04:17.250 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:04:17.250 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:04:17.250 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:04:17.250 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:04:17.250 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:04:17.250 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:04:17.250 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:04:17.250 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:04:17.250 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:17.250 16:09:36 -- spdk/autotest.sh@130 -- # uname -s 00:04:17.250 16:09:36 -- 
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:17.250 16:09:36 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:17.250 16:09:36 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:18.184 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:18.184 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:18.184 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:18.184 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:18.184 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:18.184 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:18.184 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:18.184 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:18.184 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:18.442 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:18.442 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:18.443 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:18.443 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:18.443 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:18.443 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:18.443 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:19.380 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:19.380 16:09:39 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:20.318 16:09:40 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:20.318 16:09:40 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:20.318 16:09:40 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:20.318 16:09:40 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:20.318 16:09:40 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:20.318 16:09:40 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:20.318 16:09:40 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:20.318 16:09:40 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:20.318 16:09:40 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:20.575 16:09:40 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:20.575 16:09:40 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:04:20.575 16:09:40 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:21.511 Waiting for block devices as requested 00:04:21.511 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:04:21.769 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:21.769 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:22.028 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:22.028 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:22.028 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:22.028 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:22.287 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:22.287 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:22.287 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:22.287 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:22.545 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:22.545 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:22.545 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:22.545 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:22.802 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:22.802 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:22.802 16:09:42 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 
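The per-controller loop that begins in the entry just above iterates over the NVMe PCI addresses collected a few entries earlier by get_nvme_bdfs. A minimal re-run of that enumeration outside the harness, assuming the repository path shown in the trace, would be:
$ rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # path taken from the trace; adjust for a local checkout
$ bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
$ printf '%s\n' "${bdfs[@]}"                                   # on this node it prints the single address 0000:88:00.0
The loop then resolves each address to its character device through /sys/class/nvme and inspects the oacs and unvmcap fields with nvme id-ctrl, exactly as the next entries show.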
00:04:22.802 16:09:42 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:04:22.802 16:09:42 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:22.802 16:09:42 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:04:22.802 16:09:42 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:22.802 16:09:42 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:04:22.802 16:09:42 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:23.062 16:09:42 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:23.062 16:09:42 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:23.062 16:09:42 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:23.062 16:09:42 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:23.062 16:09:42 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:23.062 16:09:42 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:23.062 16:09:42 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:04:23.062 16:09:42 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:23.062 16:09:42 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:23.062 16:09:42 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:23.062 16:09:42 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:23.062 16:09:42 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:23.062 16:09:42 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:23.062 16:09:42 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:23.062 16:09:42 -- common/autotest_common.sh@1557 -- # continue 00:04:23.062 16:09:42 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:23.062 16:09:42 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:23.062 16:09:42 -- common/autotest_common.sh@10 -- # set +x 00:04:23.062 16:09:42 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:23.062 16:09:42 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:23.062 16:09:42 -- common/autotest_common.sh@10 -- # set +x 00:04:23.062 16:09:42 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:23.997 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:23.997 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:23.997 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:23.997 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:23.997 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:23.997 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:23.997 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:23.997 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:23.997 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:24.255 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:24.255 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:24.255 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:24.255 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:24.256 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:24.256 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:24.256 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:25.192 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:25.192 16:09:44 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:25.192 16:09:44 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:25.192 16:09:44 -- 
common/autotest_common.sh@10 -- # set +x 00:04:25.192 16:09:44 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:25.192 16:09:44 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:25.192 16:09:44 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:25.192 16:09:44 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:25.192 16:09:44 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:25.192 16:09:44 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:25.192 16:09:44 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:25.192 16:09:44 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:25.192 16:09:44 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:25.192 16:09:44 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:25.192 16:09:44 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:25.450 16:09:44 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:25.450 16:09:44 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:04:25.450 16:09:44 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:25.450 16:09:44 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:04:25.450 16:09:44 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:25.450 16:09:44 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:25.450 16:09:44 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:25.450 16:09:44 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:04:25.450 16:09:44 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:04:25.450 16:09:44 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=510725 00:04:25.450 16:09:44 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:25.450 16:09:44 -- common/autotest_common.sh@1598 -- # waitforlisten 510725 00:04:25.450 16:09:44 -- common/autotest_common.sh@831 -- # '[' -z 510725 ']' 00:04:25.450 16:09:44 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.450 16:09:44 -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:25.450 16:09:44 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.450 16:09:44 -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:25.450 16:09:44 -- common/autotest_common.sh@10 -- # set +x 00:04:25.450 [2024-07-26 16:09:45.068477] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:04:25.451 [2024-07-26 16:09:45.068645] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid510725 ] 00:04:25.451 EAL: No free 2048 kB hugepages reported on node 1 00:04:25.451 [2024-07-26 16:09:45.191175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.708 [2024-07-26 16:09:45.443858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.644 16:09:46 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:26.644 16:09:46 -- common/autotest_common.sh@864 -- # return 0 00:04:26.644 16:09:46 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:26.644 16:09:46 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:26.644 16:09:46 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:04:29.928 nvme0n1 00:04:29.928 16:09:49 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:30.187 [2024-07-26 16:09:49.698629] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:30.187 [2024-07-26 16:09:49.698701] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:30.187 request: 00:04:30.187 { 00:04:30.187 "nvme_ctrlr_name": "nvme0", 00:04:30.187 "password": "test", 00:04:30.187 "method": "bdev_nvme_opal_revert", 00:04:30.187 "req_id": 1 00:04:30.187 } 00:04:30.187 Got JSON-RPC error response 00:04:30.187 response: 00:04:30.187 { 00:04:30.187 "code": -32603, 00:04:30.187 "message": "Internal error" 00:04:30.187 } 00:04:30.187 16:09:49 -- common/autotest_common.sh@1604 -- # true 00:04:30.187 16:09:49 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:30.187 16:09:49 -- common/autotest_common.sh@1608 -- # killprocess 510725 00:04:30.187 16:09:49 -- common/autotest_common.sh@950 -- # '[' -z 510725 ']' 00:04:30.187 16:09:49 -- common/autotest_common.sh@954 -- # kill -0 510725 00:04:30.187 16:09:49 -- common/autotest_common.sh@955 -- # uname 00:04:30.187 16:09:49 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:30.187 16:09:49 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 510725 00:04:30.187 16:09:49 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:30.187 16:09:49 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:30.187 16:09:49 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 510725' 00:04:30.187 killing process with pid 510725 00:04:30.187 16:09:49 -- common/autotest_common.sh@969 -- # kill 510725 00:04:30.187 16:09:49 -- common/autotest_common.sh@974 -- # wait 510725 00:04:34.407 16:09:53 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:34.407 16:09:53 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:34.407 16:09:53 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:34.407 16:09:53 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:34.407 16:09:53 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:34.407 16:09:53 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:34.407 16:09:53 -- common/autotest_common.sh@10 -- # set +x 00:04:34.407 16:09:53 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:34.407 16:09:53 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:34.407 16:09:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:34.407 16:09:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:34.407 16:09:53 -- common/autotest_common.sh@10 -- # set +x 00:04:34.407 ************************************ 00:04:34.407 START TEST env 00:04:34.407 ************************************ 00:04:34.407 16:09:53 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:34.407 * Looking for test storage... 00:04:34.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:34.407 16:09:53 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:34.407 16:09:53 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:34.407 16:09:53 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:34.407 16:09:53 env -- common/autotest_common.sh@10 -- # set +x 00:04:34.407 ************************************ 00:04:34.407 START TEST env_memory 00:04:34.407 ************************************ 00:04:34.407 16:09:53 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:34.407 00:04:34.407 00:04:34.407 CUnit - A unit testing framework for C - Version 2.1-3 00:04:34.407 http://cunit.sourceforge.net/ 00:04:34.407 00:04:34.407 00:04:34.407 Suite: memory 00:04:34.407 Test: alloc and free memory map ...[2024-07-26 16:09:53.612313] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:34.407 passed 00:04:34.407 Test: mem map translation ...[2024-07-26 16:09:53.653271] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:34.407 [2024-07-26 16:09:53.653313] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:34.407 [2024-07-26 16:09:53.653389] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:34.407 [2024-07-26 16:09:53.653420] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:34.407 passed 00:04:34.407 Test: mem map registration ...[2024-07-26 16:09:53.718848] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:34.407 [2024-07-26 16:09:53.718892] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:34.407 passed 00:04:34.407 Test: mem map adjacent registrations ...passed 00:04:34.407 00:04:34.407 Run Summary: Type Total Ran Passed Failed Inactive 00:04:34.407 suites 1 1 n/a 0 0 00:04:34.407 tests 4 4 4 0 0 00:04:34.407 asserts 152 152 152 0 n/a 00:04:34.407 00:04:34.407 Elapsed time = 0.230 seconds 00:04:34.407 00:04:34.407 real 0m0.250s 00:04:34.407 user 0m0.233s 00:04:34.407 sys 0m0.016s 00:04:34.407 16:09:53 
env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:34.407 16:09:53 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:34.407 ************************************ 00:04:34.407 END TEST env_memory 00:04:34.407 ************************************ 00:04:34.407 16:09:53 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:34.407 16:09:53 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:34.407 16:09:53 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:34.407 16:09:53 env -- common/autotest_common.sh@10 -- # set +x 00:04:34.407 ************************************ 00:04:34.407 START TEST env_vtophys 00:04:34.407 ************************************ 00:04:34.407 16:09:53 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:34.407 EAL: lib.eal log level changed from notice to debug 00:04:34.407 EAL: Detected lcore 0 as core 0 on socket 0 00:04:34.407 EAL: Detected lcore 1 as core 1 on socket 0 00:04:34.407 EAL: Detected lcore 2 as core 2 on socket 0 00:04:34.407 EAL: Detected lcore 3 as core 3 on socket 0 00:04:34.407 EAL: Detected lcore 4 as core 4 on socket 0 00:04:34.407 EAL: Detected lcore 5 as core 5 on socket 0 00:04:34.407 EAL: Detected lcore 6 as core 8 on socket 0 00:04:34.407 EAL: Detected lcore 7 as core 9 on socket 0 00:04:34.407 EAL: Detected lcore 8 as core 10 on socket 0 00:04:34.407 EAL: Detected lcore 9 as core 11 on socket 0 00:04:34.407 EAL: Detected lcore 10 as core 12 on socket 0 00:04:34.407 EAL: Detected lcore 11 as core 13 on socket 0 00:04:34.407 EAL: Detected lcore 12 as core 0 on socket 1 00:04:34.407 EAL: Detected lcore 13 as core 1 on socket 1 00:04:34.407 EAL: Detected lcore 14 as core 2 on socket 1 00:04:34.407 EAL: Detected lcore 15 as core 3 on socket 1 00:04:34.407 EAL: Detected lcore 16 as core 4 on socket 1 00:04:34.407 EAL: Detected lcore 17 as core 5 on socket 1 00:04:34.407 EAL: Detected lcore 18 as core 8 on socket 1 00:04:34.407 EAL: Detected lcore 19 as core 9 on socket 1 00:04:34.407 EAL: Detected lcore 20 as core 10 on socket 1 00:04:34.407 EAL: Detected lcore 21 as core 11 on socket 1 00:04:34.407 EAL: Detected lcore 22 as core 12 on socket 1 00:04:34.407 EAL: Detected lcore 23 as core 13 on socket 1 00:04:34.407 EAL: Detected lcore 24 as core 0 on socket 0 00:04:34.407 EAL: Detected lcore 25 as core 1 on socket 0 00:04:34.407 EAL: Detected lcore 26 as core 2 on socket 0 00:04:34.407 EAL: Detected lcore 27 as core 3 on socket 0 00:04:34.407 EAL: Detected lcore 28 as core 4 on socket 0 00:04:34.407 EAL: Detected lcore 29 as core 5 on socket 0 00:04:34.407 EAL: Detected lcore 30 as core 8 on socket 0 00:04:34.407 EAL: Detected lcore 31 as core 9 on socket 0 00:04:34.407 EAL: Detected lcore 32 as core 10 on socket 0 00:04:34.407 EAL: Detected lcore 33 as core 11 on socket 0 00:04:34.407 EAL: Detected lcore 34 as core 12 on socket 0 00:04:34.407 EAL: Detected lcore 35 as core 13 on socket 0 00:04:34.407 EAL: Detected lcore 36 as core 0 on socket 1 00:04:34.407 EAL: Detected lcore 37 as core 1 on socket 1 00:04:34.407 EAL: Detected lcore 38 as core 2 on socket 1 00:04:34.407 EAL: Detected lcore 39 as core 3 on socket 1 00:04:34.407 EAL: Detected lcore 40 as core 4 on socket 1 00:04:34.407 EAL: Detected lcore 41 as core 5 on socket 1 00:04:34.407 EAL: Detected lcore 42 as core 8 on socket 1 00:04:34.407 EAL: Detected lcore 43 as core 9 
on socket 1 00:04:34.407 EAL: Detected lcore 44 as core 10 on socket 1 00:04:34.407 EAL: Detected lcore 45 as core 11 on socket 1 00:04:34.407 EAL: Detected lcore 46 as core 12 on socket 1 00:04:34.407 EAL: Detected lcore 47 as core 13 on socket 1 00:04:34.407 EAL: Maximum logical cores by configuration: 128 00:04:34.407 EAL: Detected CPU lcores: 48 00:04:34.407 EAL: Detected NUMA nodes: 2 00:04:34.407 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:34.407 EAL: Detected shared linkage of DPDK 00:04:34.407 EAL: No shared files mode enabled, IPC will be disabled 00:04:34.407 EAL: Bus pci wants IOVA as 'DC' 00:04:34.407 EAL: Buses did not request a specific IOVA mode. 00:04:34.407 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:34.407 EAL: Selected IOVA mode 'VA' 00:04:34.407 EAL: No free 2048 kB hugepages reported on node 1 00:04:34.407 EAL: Probing VFIO support... 00:04:34.407 EAL: IOMMU type 1 (Type 1) is supported 00:04:34.407 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:34.407 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:34.407 EAL: VFIO support initialized 00:04:34.407 EAL: Ask a virtual area of 0x2e000 bytes 00:04:34.407 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:34.407 EAL: Setting up physically contiguous memory... 00:04:34.407 EAL: Setting maximum number of open files to 524288 00:04:34.408 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:34.408 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:34.408 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:34.408 EAL: Ask a virtual area of 0x61000 bytes 00:04:34.408 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:34.408 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:34.408 EAL: Ask a virtual area of 0x400000000 bytes 00:04:34.408 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:34.408 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:34.408 EAL: Ask a virtual area of 0x61000 bytes 00:04:34.408 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:34.408 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:34.408 EAL: Ask a virtual area of 0x400000000 bytes 00:04:34.408 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:34.408 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:34.408 EAL: Ask a virtual area of 0x61000 bytes 00:04:34.408 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:34.408 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:34.408 EAL: Ask a virtual area of 0x400000000 bytes 00:04:34.408 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:34.408 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:34.408 EAL: Ask a virtual area of 0x61000 bytes 00:04:34.408 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:34.408 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:34.408 EAL: Ask a virtual area of 0x400000000 bytes 00:04:34.408 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:34.408 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:34.408 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:34.408 EAL: Ask a virtual area of 0x61000 bytes 00:04:34.408 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:34.408 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:34.408 EAL: Ask a virtual 
area of 0x400000000 bytes 00:04:34.408 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:34.408 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:34.408 EAL: Ask a virtual area of 0x61000 bytes 00:04:34.408 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:34.408 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:34.408 EAL: Ask a virtual area of 0x400000000 bytes 00:04:34.408 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:34.408 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:34.408 EAL: Ask a virtual area of 0x61000 bytes 00:04:34.408 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:34.408 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:34.408 EAL: Ask a virtual area of 0x400000000 bytes 00:04:34.408 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:34.408 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:34.408 EAL: Ask a virtual area of 0x61000 bytes 00:04:34.408 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:34.408 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:34.408 EAL: Ask a virtual area of 0x400000000 bytes 00:04:34.408 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:34.408 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:34.408 EAL: Hugepages will be freed exactly as allocated. 00:04:34.408 EAL: No shared files mode enabled, IPC is disabled 00:04:34.408 EAL: No shared files mode enabled, IPC is disabled 00:04:34.408 EAL: TSC frequency is ~2700000 KHz 00:04:34.408 EAL: Main lcore 0 is ready (tid=7f17fe781a40;cpuset=[0]) 00:04:34.408 EAL: Trying to obtain current memory policy. 00:04:34.408 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.408 EAL: Restoring previous memory policy: 0 00:04:34.408 EAL: request: mp_malloc_sync 00:04:34.408 EAL: No shared files mode enabled, IPC is disabled 00:04:34.408 EAL: Heap on socket 0 was expanded by 2MB 00:04:34.408 EAL: No shared files mode enabled, IPC is disabled 00:04:34.408 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:34.408 EAL: Mem event callback 'spdk:(nil)' registered 00:04:34.408 00:04:34.408 00:04:34.408 CUnit - A unit testing framework for C - Version 2.1-3 00:04:34.408 http://cunit.sourceforge.net/ 00:04:34.408 00:04:34.408 00:04:34.408 Suite: components_suite 00:04:34.667 Test: vtophys_malloc_test ...passed 00:04:34.667 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:34.667 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.667 EAL: Restoring previous memory policy: 4 00:04:34.667 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.667 EAL: request: mp_malloc_sync 00:04:34.667 EAL: No shared files mode enabled, IPC is disabled 00:04:34.667 EAL: Heap on socket 0 was expanded by 4MB 00:04:34.667 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.667 EAL: request: mp_malloc_sync 00:04:34.667 EAL: No shared files mode enabled, IPC is disabled 00:04:34.667 EAL: Heap on socket 0 was shrunk by 4MB 00:04:34.925 EAL: Trying to obtain current memory policy. 
00:04:34.925 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.925 EAL: Restoring previous memory policy: 4 00:04:34.925 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.925 EAL: request: mp_malloc_sync 00:04:34.925 EAL: No shared files mode enabled, IPC is disabled 00:04:34.925 EAL: Heap on socket 0 was expanded by 6MB 00:04:34.925 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.925 EAL: request: mp_malloc_sync 00:04:34.925 EAL: No shared files mode enabled, IPC is disabled 00:04:34.925 EAL: Heap on socket 0 was shrunk by 6MB 00:04:34.925 EAL: Trying to obtain current memory policy. 00:04:34.925 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.925 EAL: Restoring previous memory policy: 4 00:04:34.925 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.925 EAL: request: mp_malloc_sync 00:04:34.925 EAL: No shared files mode enabled, IPC is disabled 00:04:34.925 EAL: Heap on socket 0 was expanded by 10MB 00:04:34.925 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.925 EAL: request: mp_malloc_sync 00:04:34.925 EAL: No shared files mode enabled, IPC is disabled 00:04:34.925 EAL: Heap on socket 0 was shrunk by 10MB 00:04:34.925 EAL: Trying to obtain current memory policy. 00:04:34.925 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.925 EAL: Restoring previous memory policy: 4 00:04:34.925 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.925 EAL: request: mp_malloc_sync 00:04:34.925 EAL: No shared files mode enabled, IPC is disabled 00:04:34.925 EAL: Heap on socket 0 was expanded by 18MB 00:04:34.925 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.925 EAL: request: mp_malloc_sync 00:04:34.925 EAL: No shared files mode enabled, IPC is disabled 00:04:34.925 EAL: Heap on socket 0 was shrunk by 18MB 00:04:34.925 EAL: Trying to obtain current memory policy. 00:04:34.925 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.925 EAL: Restoring previous memory policy: 4 00:04:34.925 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.925 EAL: request: mp_malloc_sync 00:04:34.925 EAL: No shared files mode enabled, IPC is disabled 00:04:34.925 EAL: Heap on socket 0 was expanded by 34MB 00:04:34.925 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.925 EAL: request: mp_malloc_sync 00:04:34.925 EAL: No shared files mode enabled, IPC is disabled 00:04:34.925 EAL: Heap on socket 0 was shrunk by 34MB 00:04:34.925 EAL: Trying to obtain current memory policy. 00:04:34.925 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.925 EAL: Restoring previous memory policy: 4 00:04:34.926 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.926 EAL: request: mp_malloc_sync 00:04:34.926 EAL: No shared files mode enabled, IPC is disabled 00:04:34.926 EAL: Heap on socket 0 was expanded by 66MB 00:04:35.183 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.183 EAL: request: mp_malloc_sync 00:04:35.183 EAL: No shared files mode enabled, IPC is disabled 00:04:35.183 EAL: Heap on socket 0 was shrunk by 66MB 00:04:35.183 EAL: Trying to obtain current memory policy. 
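The expand/shrink pairs in this stretch are the vtophys unit test allocating and then freeing progressively larger DPDK buffers; each "expanded by N MB" message corresponds to roughly N/2 additional 2048 kB hugepages being drawn from the pool listed under Hugepages earlier in the log. A quick way to watch that pool from another shell while the test runs (a side check, not part of the harness) is:
$ watch -n1 "grep -E 'HugePages_(Total|Free)' /proc/meminfo"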
00:04:35.183 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.183 EAL: Restoring previous memory policy: 4 00:04:35.183 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.183 EAL: request: mp_malloc_sync 00:04:35.183 EAL: No shared files mode enabled, IPC is disabled 00:04:35.183 EAL: Heap on socket 0 was expanded by 130MB 00:04:35.441 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.698 EAL: request: mp_malloc_sync 00:04:35.698 EAL: No shared files mode enabled, IPC is disabled 00:04:35.698 EAL: Heap on socket 0 was shrunk by 130MB 00:04:35.698 EAL: Trying to obtain current memory policy. 00:04:35.698 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.955 EAL: Restoring previous memory policy: 4 00:04:35.955 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.955 EAL: request: mp_malloc_sync 00:04:35.955 EAL: No shared files mode enabled, IPC is disabled 00:04:35.955 EAL: Heap on socket 0 was expanded by 258MB 00:04:36.212 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.469 EAL: request: mp_malloc_sync 00:04:36.469 EAL: No shared files mode enabled, IPC is disabled 00:04:36.469 EAL: Heap on socket 0 was shrunk by 258MB 00:04:36.727 EAL: Trying to obtain current memory policy. 00:04:36.727 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.985 EAL: Restoring previous memory policy: 4 00:04:36.985 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.985 EAL: request: mp_malloc_sync 00:04:36.985 EAL: No shared files mode enabled, IPC is disabled 00:04:36.985 EAL: Heap on socket 0 was expanded by 514MB 00:04:37.918 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.918 EAL: request: mp_malloc_sync 00:04:37.918 EAL: No shared files mode enabled, IPC is disabled 00:04:37.918 EAL: Heap on socket 0 was shrunk by 514MB 00:04:38.850 EAL: Trying to obtain current memory policy. 
00:04:38.850 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.108 EAL: Restoring previous memory policy: 4 00:04:39.108 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.108 EAL: request: mp_malloc_sync 00:04:39.108 EAL: No shared files mode enabled, IPC is disabled 00:04:39.108 EAL: Heap on socket 0 was expanded by 1026MB 00:04:41.004 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.262 EAL: request: mp_malloc_sync 00:04:41.262 EAL: No shared files mode enabled, IPC is disabled 00:04:41.262 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:42.636 passed 00:04:42.636 00:04:42.636 Run Summary: Type Total Ran Passed Failed Inactive 00:04:42.636 suites 1 1 n/a 0 0 00:04:42.636 tests 2 2 2 0 0 00:04:42.636 asserts 497 497 497 0 n/a 00:04:42.636 00:04:42.636 Elapsed time = 8.295 seconds 00:04:42.636 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.636 EAL: request: mp_malloc_sync 00:04:42.636 EAL: No shared files mode enabled, IPC is disabled 00:04:42.636 EAL: Heap on socket 0 was shrunk by 2MB 00:04:42.636 EAL: No shared files mode enabled, IPC is disabled 00:04:42.636 EAL: No shared files mode enabled, IPC is disabled 00:04:42.636 EAL: No shared files mode enabled, IPC is disabled 00:04:42.894 00:04:42.894 real 0m8.565s 00:04:42.894 user 0m7.456s 00:04:42.894 sys 0m1.047s 00:04:42.894 16:10:02 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.894 16:10:02 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:42.894 ************************************ 00:04:42.894 END TEST env_vtophys 00:04:42.894 ************************************ 00:04:42.894 16:10:02 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:42.894 16:10:02 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.894 16:10:02 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.894 16:10:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:42.894 ************************************ 00:04:42.894 START TEST env_pci 00:04:42.894 ************************************ 00:04:42.894 16:10:02 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:42.894 00:04:42.894 00:04:42.894 CUnit - A unit testing framework for C - Version 2.1-3 00:04:42.894 http://cunit.sourceforge.net/ 00:04:42.894 00:04:42.894 00:04:42.894 Suite: pci 00:04:42.894 Test: pci_hook ...[2024-07-26 16:10:02.504675] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 512812 has claimed it 00:04:42.894 EAL: Cannot find device (10000:00:01.0) 00:04:42.894 EAL: Failed to attach device on primary process 00:04:42.894 passed 00:04:42.894 00:04:42.894 Run Summary: Type Total Ran Passed Failed Inactive 00:04:42.894 suites 1 1 n/a 0 0 00:04:42.894 tests 1 1 1 0 0 00:04:42.894 asserts 25 25 25 0 n/a 00:04:42.894 00:04:42.894 Elapsed time = 0.042 seconds 00:04:42.894 00:04:42.894 real 0m0.094s 00:04:42.894 user 0m0.039s 00:04:42.894 sys 0m0.054s 00:04:42.894 16:10:02 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.894 16:10:02 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:42.894 ************************************ 00:04:42.894 END TEST env_pci 00:04:42.894 ************************************ 00:04:42.894 16:10:02 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:42.894 
16:10:02 env -- env/env.sh@15 -- # uname 00:04:42.895 16:10:02 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:42.895 16:10:02 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:42.895 16:10:02 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:42.895 16:10:02 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:42.895 16:10:02 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.895 16:10:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:42.895 ************************************ 00:04:42.895 START TEST env_dpdk_post_init 00:04:42.895 ************************************ 00:04:42.895 16:10:02 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:43.154 EAL: Detected CPU lcores: 48 00:04:43.154 EAL: Detected NUMA nodes: 2 00:04:43.154 EAL: Detected shared linkage of DPDK 00:04:43.154 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:43.154 EAL: Selected IOVA mode 'VA' 00:04:43.154 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.154 EAL: VFIO support initialized 00:04:43.154 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:43.154 EAL: Using IOMMU type 1 (Type 1) 00:04:43.154 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:04:43.154 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:04:43.154 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:04:43.154 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:04:43.412 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:04:43.412 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:04:43.412 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:04:43.412 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:04:43.412 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:04:43.412 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:04:43.412 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:04:43.412 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:04:43.412 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:04:43.412 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:04:43.412 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:04:43.412 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:04:44.347 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:04:47.625 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:04:47.625 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:04:47.625 Starting DPDK initialization... 00:04:47.625 Starting SPDK post initialization... 00:04:47.625 SPDK NVMe probe 00:04:47.625 Attaching to 0000:88:00.0 00:04:47.625 Attached to 0000:88:00.0 00:04:47.625 Cleaning up... 
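The env_dpdk_post_init pass above is a standalone binary probing whatever is bound to vfio-pci at the time. Reproducing it by hand on a node prepared the same way, using only commands that already appear in this log, would look roughly like:
$ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$ sudo scripts/setup.sh                  # rebinds the I/OAT and NVMe devices from their kernel drivers to vfio-pci, as traced above
$ sudo test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
$ basename "$(readlink /sys/bus/pci/devices/0000:88:00.0/driver)"   # should report vfio-pci while SPDK owns the disk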
00:04:47.625 00:04:47.625 real 0m4.563s 00:04:47.625 user 0m3.378s 00:04:47.625 sys 0m0.238s 00:04:47.625 16:10:07 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:47.625 16:10:07 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:47.625 ************************************ 00:04:47.625 END TEST env_dpdk_post_init 00:04:47.625 ************************************ 00:04:47.625 16:10:07 env -- env/env.sh@26 -- # uname 00:04:47.625 16:10:07 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:47.625 16:10:07 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:47.625 16:10:07 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:47.625 16:10:07 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:47.625 16:10:07 env -- common/autotest_common.sh@10 -- # set +x 00:04:47.625 ************************************ 00:04:47.625 START TEST env_mem_callbacks 00:04:47.625 ************************************ 00:04:47.625 16:10:07 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:47.625 EAL: Detected CPU lcores: 48 00:04:47.625 EAL: Detected NUMA nodes: 2 00:04:47.625 EAL: Detected shared linkage of DPDK 00:04:47.625 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:47.625 EAL: Selected IOVA mode 'VA' 00:04:47.625 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.625 EAL: VFIO support initialized 00:04:47.625 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:47.625 00:04:47.625 00:04:47.625 CUnit - A unit testing framework for C - Version 2.1-3 00:04:47.625 http://cunit.sourceforge.net/ 00:04:47.625 00:04:47.625 00:04:47.625 Suite: memory 00:04:47.625 Test: test ... 
00:04:47.625 register 0x200000200000 2097152 00:04:47.625 malloc 3145728 00:04:47.625 register 0x200000400000 4194304 00:04:47.625 buf 0x2000004fffc0 len 3145728 PASSED 00:04:47.625 malloc 64 00:04:47.625 buf 0x2000004ffec0 len 64 PASSED 00:04:47.625 malloc 4194304 00:04:47.625 register 0x200000800000 6291456 00:04:47.625 buf 0x2000009fffc0 len 4194304 PASSED 00:04:47.625 free 0x2000004fffc0 3145728 00:04:47.625 free 0x2000004ffec0 64 00:04:47.625 unregister 0x200000400000 4194304 PASSED 00:04:47.625 free 0x2000009fffc0 4194304 00:04:47.625 unregister 0x200000800000 6291456 PASSED 00:04:47.625 malloc 8388608 00:04:47.625 register 0x200000400000 10485760 00:04:47.625 buf 0x2000005fffc0 len 8388608 PASSED 00:04:47.625 free 0x2000005fffc0 8388608 00:04:47.625 unregister 0x200000400000 10485760 PASSED 00:04:47.883 passed 00:04:47.883 00:04:47.883 Run Summary: Type Total Ran Passed Failed Inactive 00:04:47.883 suites 1 1 n/a 0 0 00:04:47.883 tests 1 1 1 0 0 00:04:47.883 asserts 15 15 15 0 n/a 00:04:47.883 00:04:47.883 Elapsed time = 0.060 seconds 00:04:47.883 00:04:47.883 real 0m0.177s 00:04:47.883 user 0m0.099s 00:04:47.883 sys 0m0.077s 00:04:47.883 16:10:07 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:47.883 16:10:07 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:47.883 ************************************ 00:04:47.883 END TEST env_mem_callbacks 00:04:47.883 ************************************ 00:04:47.883 00:04:47.883 real 0m13.939s 00:04:47.883 user 0m11.307s 00:04:47.883 sys 0m1.636s 00:04:47.883 16:10:07 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:47.883 16:10:07 env -- common/autotest_common.sh@10 -- # set +x 00:04:47.883 ************************************ 00:04:47.883 END TEST env 00:04:47.883 ************************************ 00:04:47.883 16:10:07 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:47.883 16:10:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:47.883 16:10:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:47.883 16:10:07 -- common/autotest_common.sh@10 -- # set +x 00:04:47.883 ************************************ 00:04:47.883 START TEST rpc 00:04:47.883 ************************************ 00:04:47.883 16:10:07 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:47.883 * Looking for test storage... 00:04:47.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:47.883 16:10:07 rpc -- rpc/rpc.sh@65 -- # spdk_pid=513589 00:04:47.883 16:10:07 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:47.883 16:10:07 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:47.883 16:10:07 rpc -- rpc/rpc.sh@67 -- # waitforlisten 513589 00:04:47.883 16:10:07 rpc -- common/autotest_common.sh@831 -- # '[' -z 513589 ']' 00:04:47.883 16:10:07 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.883 16:10:07 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:47.883 16:10:07 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
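waitforlisten simply polls until the target's RPC socket accepts connections; once the "Waiting for process..." message above gives way to the startup banner, the same socket can be exercised directly. A minimal check from the spdk checkout shown in the paths above, assuming the default socket path, is:
$ ./build/bin/spdk_tgt -e bdev &                           # the command rpc.sh@64 launched a few entries above
$ ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods   # once the socket is listening, lists the methods (bdev_malloc_create, bdev_passthru_create, ...) exercised by the rpc_integrity test below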
00:04:47.883 16:10:07 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:47.883 16:10:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.883 [2024-07-26 16:10:07.624827] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:04:47.883 [2024-07-26 16:10:07.624980] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid513589 ] 00:04:48.142 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.142 [2024-07-26 16:10:07.751982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.399 [2024-07-26 16:10:08.004389] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:48.399 [2024-07-26 16:10:08.004477] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 513589' to capture a snapshot of events at runtime. 00:04:48.399 [2024-07-26 16:10:08.004502] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:48.399 [2024-07-26 16:10:08.004531] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:48.399 [2024-07-26 16:10:08.004549] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid513589 for offline analysis/debug. 00:04:48.400 [2024-07-26 16:10:08.004601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.336 16:10:08 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:49.336 16:10:08 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:49.336 16:10:08 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:49.336 16:10:08 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:49.336 16:10:08 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:49.336 16:10:08 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:49.336 16:10:08 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.336 16:10:08 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.336 16:10:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.336 ************************************ 00:04:49.336 START TEST rpc_integrity 00:04:49.336 ************************************ 00:04:49.336 16:10:08 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:49.336 16:10:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:49.336 16:10:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.336 16:10:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.336 16:10:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.336 16:10:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:49.336 16:10:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:49.336 16:10:08 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:49.336 16:10:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:49.336 16:10:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.336 16:10:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.336 16:10:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.336 16:10:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:49.336 16:10:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:49.336 16:10:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.336 16:10:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.336 16:10:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.336 16:10:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:49.336 { 00:04:49.336 "name": "Malloc0", 00:04:49.336 "aliases": [ 00:04:49.336 "51a00dd9-f17c-487d-b74e-c2cb213f5aa5" 00:04:49.336 ], 00:04:49.336 "product_name": "Malloc disk", 00:04:49.336 "block_size": 512, 00:04:49.336 "num_blocks": 16384, 00:04:49.336 "uuid": "51a00dd9-f17c-487d-b74e-c2cb213f5aa5", 00:04:49.336 "assigned_rate_limits": { 00:04:49.336 "rw_ios_per_sec": 0, 00:04:49.336 "rw_mbytes_per_sec": 0, 00:04:49.336 "r_mbytes_per_sec": 0, 00:04:49.336 "w_mbytes_per_sec": 0 00:04:49.336 }, 00:04:49.336 "claimed": false, 00:04:49.336 "zoned": false, 00:04:49.336 "supported_io_types": { 00:04:49.336 "read": true, 00:04:49.336 "write": true, 00:04:49.336 "unmap": true, 00:04:49.336 "flush": true, 00:04:49.336 "reset": true, 00:04:49.336 "nvme_admin": false, 00:04:49.336 "nvme_io": false, 00:04:49.336 "nvme_io_md": false, 00:04:49.336 "write_zeroes": true, 00:04:49.336 "zcopy": true, 00:04:49.336 "get_zone_info": false, 00:04:49.336 "zone_management": false, 00:04:49.336 "zone_append": false, 00:04:49.336 "compare": false, 00:04:49.336 "compare_and_write": false, 00:04:49.336 "abort": true, 00:04:49.336 "seek_hole": false, 00:04:49.336 "seek_data": false, 00:04:49.336 "copy": true, 00:04:49.336 "nvme_iov_md": false 00:04:49.336 }, 00:04:49.336 "memory_domains": [ 00:04:49.336 { 00:04:49.336 "dma_device_id": "system", 00:04:49.336 "dma_device_type": 1 00:04:49.336 }, 00:04:49.336 { 00:04:49.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.336 "dma_device_type": 2 00:04:49.336 } 00:04:49.336 ], 00:04:49.336 "driver_specific": {} 00:04:49.336 } 00:04:49.336 ]' 00:04:49.336 16:10:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:49.336 16:10:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:49.336 16:10:08 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:49.336 16:10:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.336 16:10:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.336 [2024-07-26 16:10:08.995854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:49.336 [2024-07-26 16:10:08.995935] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:49.336 [2024-07-26 16:10:08.995983] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000022880 00:04:49.336 [2024-07-26 16:10:08.996013] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:49.336 [2024-07-26 16:10:08.998792] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:04:49.336 [2024-07-26 16:10:08.998834] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:49.336 Passthru0 00:04:49.336 16:10:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.336 16:10:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:49.336 16:10:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.336 16:10:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.336 16:10:09 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.336 16:10:09 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:49.336 { 00:04:49.336 "name": "Malloc0", 00:04:49.336 "aliases": [ 00:04:49.336 "51a00dd9-f17c-487d-b74e-c2cb213f5aa5" 00:04:49.336 ], 00:04:49.336 "product_name": "Malloc disk", 00:04:49.336 "block_size": 512, 00:04:49.336 "num_blocks": 16384, 00:04:49.336 "uuid": "51a00dd9-f17c-487d-b74e-c2cb213f5aa5", 00:04:49.336 "assigned_rate_limits": { 00:04:49.336 "rw_ios_per_sec": 0, 00:04:49.336 "rw_mbytes_per_sec": 0, 00:04:49.336 "r_mbytes_per_sec": 0, 00:04:49.336 "w_mbytes_per_sec": 0 00:04:49.336 }, 00:04:49.336 "claimed": true, 00:04:49.336 "claim_type": "exclusive_write", 00:04:49.336 "zoned": false, 00:04:49.336 "supported_io_types": { 00:04:49.336 "read": true, 00:04:49.336 "write": true, 00:04:49.336 "unmap": true, 00:04:49.336 "flush": true, 00:04:49.336 "reset": true, 00:04:49.336 "nvme_admin": false, 00:04:49.336 "nvme_io": false, 00:04:49.336 "nvme_io_md": false, 00:04:49.336 "write_zeroes": true, 00:04:49.336 "zcopy": true, 00:04:49.336 "get_zone_info": false, 00:04:49.336 "zone_management": false, 00:04:49.336 "zone_append": false, 00:04:49.336 "compare": false, 00:04:49.336 "compare_and_write": false, 00:04:49.336 "abort": true, 00:04:49.336 "seek_hole": false, 00:04:49.336 "seek_data": false, 00:04:49.336 "copy": true, 00:04:49.336 "nvme_iov_md": false 00:04:49.336 }, 00:04:49.336 "memory_domains": [ 00:04:49.336 { 00:04:49.336 "dma_device_id": "system", 00:04:49.336 "dma_device_type": 1 00:04:49.336 }, 00:04:49.336 { 00:04:49.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.336 "dma_device_type": 2 00:04:49.336 } 00:04:49.336 ], 00:04:49.336 "driver_specific": {} 00:04:49.336 }, 00:04:49.336 { 00:04:49.336 "name": "Passthru0", 00:04:49.336 "aliases": [ 00:04:49.336 "b03e8a2a-d8e5-5169-9a1f-fba1130c3a39" 00:04:49.336 ], 00:04:49.336 "product_name": "passthru", 00:04:49.336 "block_size": 512, 00:04:49.336 "num_blocks": 16384, 00:04:49.336 "uuid": "b03e8a2a-d8e5-5169-9a1f-fba1130c3a39", 00:04:49.336 "assigned_rate_limits": { 00:04:49.336 "rw_ios_per_sec": 0, 00:04:49.336 "rw_mbytes_per_sec": 0, 00:04:49.337 "r_mbytes_per_sec": 0, 00:04:49.337 "w_mbytes_per_sec": 0 00:04:49.337 }, 00:04:49.337 "claimed": false, 00:04:49.337 "zoned": false, 00:04:49.337 "supported_io_types": { 00:04:49.337 "read": true, 00:04:49.337 "write": true, 00:04:49.337 "unmap": true, 00:04:49.337 "flush": true, 00:04:49.337 "reset": true, 00:04:49.337 "nvme_admin": false, 00:04:49.337 "nvme_io": false, 00:04:49.337 "nvme_io_md": false, 00:04:49.337 "write_zeroes": true, 00:04:49.337 "zcopy": true, 00:04:49.337 "get_zone_info": false, 00:04:49.337 "zone_management": false, 00:04:49.337 "zone_append": false, 00:04:49.337 "compare": false, 00:04:49.337 "compare_and_write": false, 00:04:49.337 "abort": true, 00:04:49.337 "seek_hole": false, 00:04:49.337 "seek_data": false, 00:04:49.337 "copy": true, 00:04:49.337 "nvme_iov_md": false 
00:04:49.337 }, 00:04:49.337 "memory_domains": [ 00:04:49.337 { 00:04:49.337 "dma_device_id": "system", 00:04:49.337 "dma_device_type": 1 00:04:49.337 }, 00:04:49.337 { 00:04:49.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.337 "dma_device_type": 2 00:04:49.337 } 00:04:49.337 ], 00:04:49.337 "driver_specific": { 00:04:49.337 "passthru": { 00:04:49.337 "name": "Passthru0", 00:04:49.337 "base_bdev_name": "Malloc0" 00:04:49.337 } 00:04:49.337 } 00:04:49.337 } 00:04:49.337 ]' 00:04:49.337 16:10:09 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:49.337 16:10:09 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:49.337 16:10:09 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:49.337 16:10:09 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.337 16:10:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.337 16:10:09 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.337 16:10:09 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:49.337 16:10:09 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.337 16:10:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.337 16:10:09 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.337 16:10:09 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:49.337 16:10:09 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.337 16:10:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.595 16:10:09 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.595 16:10:09 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:49.595 16:10:09 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:49.595 16:10:09 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:49.595 00:04:49.595 real 0m0.258s 00:04:49.595 user 0m0.155s 00:04:49.595 sys 0m0.017s 00:04:49.595 16:10:09 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.595 16:10:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.595 ************************************ 00:04:49.595 END TEST rpc_integrity 00:04:49.595 ************************************ 00:04:49.595 16:10:09 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:49.595 16:10:09 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.595 16:10:09 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.595 16:10:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.595 ************************************ 00:04:49.595 START TEST rpc_plugins 00:04:49.595 ************************************ 00:04:49.595 16:10:09 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:49.595 16:10:09 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:49.595 16:10:09 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.595 16:10:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:49.595 16:10:09 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.595 16:10:09 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:49.595 16:10:09 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:49.595 16:10:09 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.595 16:10:09 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:04:49.595 16:10:09 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.595 16:10:09 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:49.595 { 00:04:49.595 "name": "Malloc1", 00:04:49.595 "aliases": [ 00:04:49.595 "9d92fb76-50bb-4f90-b634-069aa8d1ce19" 00:04:49.595 ], 00:04:49.595 "product_name": "Malloc disk", 00:04:49.595 "block_size": 4096, 00:04:49.595 "num_blocks": 256, 00:04:49.595 "uuid": "9d92fb76-50bb-4f90-b634-069aa8d1ce19", 00:04:49.595 "assigned_rate_limits": { 00:04:49.595 "rw_ios_per_sec": 0, 00:04:49.595 "rw_mbytes_per_sec": 0, 00:04:49.595 "r_mbytes_per_sec": 0, 00:04:49.595 "w_mbytes_per_sec": 0 00:04:49.595 }, 00:04:49.595 "claimed": false, 00:04:49.595 "zoned": false, 00:04:49.595 "supported_io_types": { 00:04:49.595 "read": true, 00:04:49.595 "write": true, 00:04:49.595 "unmap": true, 00:04:49.595 "flush": true, 00:04:49.595 "reset": true, 00:04:49.595 "nvme_admin": false, 00:04:49.595 "nvme_io": false, 00:04:49.595 "nvme_io_md": false, 00:04:49.595 "write_zeroes": true, 00:04:49.595 "zcopy": true, 00:04:49.595 "get_zone_info": false, 00:04:49.595 "zone_management": false, 00:04:49.595 "zone_append": false, 00:04:49.595 "compare": false, 00:04:49.595 "compare_and_write": false, 00:04:49.595 "abort": true, 00:04:49.595 "seek_hole": false, 00:04:49.595 "seek_data": false, 00:04:49.595 "copy": true, 00:04:49.595 "nvme_iov_md": false 00:04:49.595 }, 00:04:49.595 "memory_domains": [ 00:04:49.595 { 00:04:49.595 "dma_device_id": "system", 00:04:49.595 "dma_device_type": 1 00:04:49.595 }, 00:04:49.595 { 00:04:49.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.595 "dma_device_type": 2 00:04:49.595 } 00:04:49.595 ], 00:04:49.595 "driver_specific": {} 00:04:49.595 } 00:04:49.595 ]' 00:04:49.595 16:10:09 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:49.595 16:10:09 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:49.595 16:10:09 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:49.595 16:10:09 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.595 16:10:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:49.595 16:10:09 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.595 16:10:09 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:49.595 16:10:09 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.595 16:10:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:49.595 16:10:09 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.595 16:10:09 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:49.595 16:10:09 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:49.595 16:10:09 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:49.595 00:04:49.595 real 0m0.114s 00:04:49.595 user 0m0.074s 00:04:49.595 sys 0m0.010s 00:04:49.595 16:10:09 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.595 16:10:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:49.595 ************************************ 00:04:49.595 END TEST rpc_plugins 00:04:49.595 ************************************ 00:04:49.595 16:10:09 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:49.595 16:10:09 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.596 16:10:09 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.596 16:10:09 
rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.596 ************************************ 00:04:49.596 START TEST rpc_trace_cmd_test 00:04:49.596 ************************************ 00:04:49.596 16:10:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:49.596 16:10:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:49.596 16:10:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:49.596 16:10:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.596 16:10:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:49.854 16:10:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.854 16:10:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:49.854 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid513589", 00:04:49.854 "tpoint_group_mask": "0x8", 00:04:49.854 "iscsi_conn": { 00:04:49.854 "mask": "0x2", 00:04:49.854 "tpoint_mask": "0x0" 00:04:49.854 }, 00:04:49.854 "scsi": { 00:04:49.854 "mask": "0x4", 00:04:49.854 "tpoint_mask": "0x0" 00:04:49.854 }, 00:04:49.854 "bdev": { 00:04:49.854 "mask": "0x8", 00:04:49.854 "tpoint_mask": "0xffffffffffffffff" 00:04:49.854 }, 00:04:49.854 "nvmf_rdma": { 00:04:49.854 "mask": "0x10", 00:04:49.854 "tpoint_mask": "0x0" 00:04:49.854 }, 00:04:49.854 "nvmf_tcp": { 00:04:49.854 "mask": "0x20", 00:04:49.854 "tpoint_mask": "0x0" 00:04:49.854 }, 00:04:49.854 "ftl": { 00:04:49.854 "mask": "0x40", 00:04:49.854 "tpoint_mask": "0x0" 00:04:49.854 }, 00:04:49.854 "blobfs": { 00:04:49.854 "mask": "0x80", 00:04:49.854 "tpoint_mask": "0x0" 00:04:49.854 }, 00:04:49.854 "dsa": { 00:04:49.854 "mask": "0x200", 00:04:49.854 "tpoint_mask": "0x0" 00:04:49.854 }, 00:04:49.854 "thread": { 00:04:49.854 "mask": "0x400", 00:04:49.854 "tpoint_mask": "0x0" 00:04:49.854 }, 00:04:49.854 "nvme_pcie": { 00:04:49.854 "mask": "0x800", 00:04:49.854 "tpoint_mask": "0x0" 00:04:49.854 }, 00:04:49.854 "iaa": { 00:04:49.854 "mask": "0x1000", 00:04:49.854 "tpoint_mask": "0x0" 00:04:49.854 }, 00:04:49.854 "nvme_tcp": { 00:04:49.854 "mask": "0x2000", 00:04:49.854 "tpoint_mask": "0x0" 00:04:49.854 }, 00:04:49.854 "bdev_nvme": { 00:04:49.854 "mask": "0x4000", 00:04:49.854 "tpoint_mask": "0x0" 00:04:49.854 }, 00:04:49.854 "sock": { 00:04:49.854 "mask": "0x8000", 00:04:49.854 "tpoint_mask": "0x0" 00:04:49.854 } 00:04:49.854 }' 00:04:49.854 16:10:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:49.854 16:10:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:49.854 16:10:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:49.854 16:10:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:49.854 16:10:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:49.854 16:10:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:49.854 16:10:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:49.854 16:10:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:49.854 16:10:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:49.854 16:10:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:49.854 00:04:49.854 real 0m0.197s 00:04:49.854 user 0m0.169s 00:04:49.854 sys 0m0.021s 00:04:49.854 16:10:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.854 16:10:09 rpc.rpc_trace_cmd_test 
-- common/autotest_common.sh@10 -- # set +x 00:04:49.854 ************************************ 00:04:49.854 END TEST rpc_trace_cmd_test 00:04:49.854 ************************************ 00:04:49.854 16:10:09 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:49.854 16:10:09 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:49.854 16:10:09 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:49.854 16:10:09 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.854 16:10:09 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.854 16:10:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.854 ************************************ 00:04:49.854 START TEST rpc_daemon_integrity 00:04:49.854 ************************************ 00:04:49.854 16:10:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:49.854 16:10:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:49.854 16:10:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.854 16:10:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.854 16:10:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.854 16:10:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:49.854 16:10:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:50.113 16:10:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:50.113 16:10:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:50.113 16:10:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.113 16:10:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.113 16:10:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.113 16:10:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:50.113 16:10:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:50.113 16:10:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.113 16:10:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.113 16:10:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.113 16:10:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:50.113 { 00:04:50.113 "name": "Malloc2", 00:04:50.113 "aliases": [ 00:04:50.113 "c1c1e86b-ca25-4452-bde0-5d574af04161" 00:04:50.113 ], 00:04:50.113 "product_name": "Malloc disk", 00:04:50.113 "block_size": 512, 00:04:50.113 "num_blocks": 16384, 00:04:50.113 "uuid": "c1c1e86b-ca25-4452-bde0-5d574af04161", 00:04:50.113 "assigned_rate_limits": { 00:04:50.113 "rw_ios_per_sec": 0, 00:04:50.113 "rw_mbytes_per_sec": 0, 00:04:50.113 "r_mbytes_per_sec": 0, 00:04:50.113 "w_mbytes_per_sec": 0 00:04:50.113 }, 00:04:50.113 "claimed": false, 00:04:50.113 "zoned": false, 00:04:50.113 "supported_io_types": { 00:04:50.113 "read": true, 00:04:50.113 "write": true, 00:04:50.113 "unmap": true, 00:04:50.113 "flush": true, 00:04:50.113 "reset": true, 00:04:50.113 "nvme_admin": false, 00:04:50.113 "nvme_io": false, 00:04:50.113 "nvme_io_md": false, 00:04:50.113 "write_zeroes": true, 00:04:50.113 "zcopy": true, 00:04:50.113 "get_zone_info": false, 00:04:50.113 "zone_management": false, 00:04:50.113 "zone_append": false, 00:04:50.113 "compare": false, 00:04:50.113 "compare_and_write": false, 00:04:50.113 "abort": true, 
00:04:50.113 "seek_hole": false, 00:04:50.113 "seek_data": false, 00:04:50.113 "copy": true, 00:04:50.113 "nvme_iov_md": false 00:04:50.113 }, 00:04:50.113 "memory_domains": [ 00:04:50.113 { 00:04:50.113 "dma_device_id": "system", 00:04:50.113 "dma_device_type": 1 00:04:50.113 }, 00:04:50.113 { 00:04:50.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.113 "dma_device_type": 2 00:04:50.113 } 00:04:50.113 ], 00:04:50.113 "driver_specific": {} 00:04:50.113 } 00:04:50.113 ]' 00:04:50.113 16:10:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:50.113 16:10:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:50.113 16:10:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:50.113 16:10:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.113 16:10:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.113 [2024-07-26 16:10:09.705511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:50.113 [2024-07-26 16:10:09.705581] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:50.113 [2024-07-26 16:10:09.705622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000023a80 00:04:50.113 [2024-07-26 16:10:09.705649] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:50.113 [2024-07-26 16:10:09.708384] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:50.113 [2024-07-26 16:10:09.708429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:50.113 Passthru0 00:04:50.113 16:10:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.113 16:10:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:50.113 16:10:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.113 16:10:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.113 16:10:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.113 16:10:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:50.113 { 00:04:50.113 "name": "Malloc2", 00:04:50.113 "aliases": [ 00:04:50.113 "c1c1e86b-ca25-4452-bde0-5d574af04161" 00:04:50.113 ], 00:04:50.113 "product_name": "Malloc disk", 00:04:50.113 "block_size": 512, 00:04:50.113 "num_blocks": 16384, 00:04:50.113 "uuid": "c1c1e86b-ca25-4452-bde0-5d574af04161", 00:04:50.113 "assigned_rate_limits": { 00:04:50.113 "rw_ios_per_sec": 0, 00:04:50.113 "rw_mbytes_per_sec": 0, 00:04:50.113 "r_mbytes_per_sec": 0, 00:04:50.114 "w_mbytes_per_sec": 0 00:04:50.114 }, 00:04:50.114 "claimed": true, 00:04:50.114 "claim_type": "exclusive_write", 00:04:50.114 "zoned": false, 00:04:50.114 "supported_io_types": { 00:04:50.114 "read": true, 00:04:50.114 "write": true, 00:04:50.114 "unmap": true, 00:04:50.114 "flush": true, 00:04:50.114 "reset": true, 00:04:50.114 "nvme_admin": false, 00:04:50.114 "nvme_io": false, 00:04:50.114 "nvme_io_md": false, 00:04:50.114 "write_zeroes": true, 00:04:50.114 "zcopy": true, 00:04:50.114 "get_zone_info": false, 00:04:50.114 "zone_management": false, 00:04:50.114 "zone_append": false, 00:04:50.114 "compare": false, 00:04:50.114 "compare_and_write": false, 00:04:50.114 "abort": true, 00:04:50.114 "seek_hole": false, 00:04:50.114 "seek_data": false, 00:04:50.114 "copy": true, 00:04:50.114 "nvme_iov_md": 
false 00:04:50.114 }, 00:04:50.114 "memory_domains": [ 00:04:50.114 { 00:04:50.114 "dma_device_id": "system", 00:04:50.114 "dma_device_type": 1 00:04:50.114 }, 00:04:50.114 { 00:04:50.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.114 "dma_device_type": 2 00:04:50.114 } 00:04:50.114 ], 00:04:50.114 "driver_specific": {} 00:04:50.114 }, 00:04:50.114 { 00:04:50.114 "name": "Passthru0", 00:04:50.114 "aliases": [ 00:04:50.114 "64846ec8-32ef-5517-9b43-ece3009035f8" 00:04:50.114 ], 00:04:50.114 "product_name": "passthru", 00:04:50.114 "block_size": 512, 00:04:50.114 "num_blocks": 16384, 00:04:50.114 "uuid": "64846ec8-32ef-5517-9b43-ece3009035f8", 00:04:50.114 "assigned_rate_limits": { 00:04:50.114 "rw_ios_per_sec": 0, 00:04:50.114 "rw_mbytes_per_sec": 0, 00:04:50.114 "r_mbytes_per_sec": 0, 00:04:50.114 "w_mbytes_per_sec": 0 00:04:50.114 }, 00:04:50.114 "claimed": false, 00:04:50.114 "zoned": false, 00:04:50.114 "supported_io_types": { 00:04:50.114 "read": true, 00:04:50.114 "write": true, 00:04:50.114 "unmap": true, 00:04:50.114 "flush": true, 00:04:50.114 "reset": true, 00:04:50.114 "nvme_admin": false, 00:04:50.114 "nvme_io": false, 00:04:50.114 "nvme_io_md": false, 00:04:50.114 "write_zeroes": true, 00:04:50.114 "zcopy": true, 00:04:50.114 "get_zone_info": false, 00:04:50.114 "zone_management": false, 00:04:50.114 "zone_append": false, 00:04:50.114 "compare": false, 00:04:50.114 "compare_and_write": false, 00:04:50.114 "abort": true, 00:04:50.114 "seek_hole": false, 00:04:50.114 "seek_data": false, 00:04:50.114 "copy": true, 00:04:50.114 "nvme_iov_md": false 00:04:50.114 }, 00:04:50.114 "memory_domains": [ 00:04:50.114 { 00:04:50.114 "dma_device_id": "system", 00:04:50.114 "dma_device_type": 1 00:04:50.114 }, 00:04:50.114 { 00:04:50.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.114 "dma_device_type": 2 00:04:50.114 } 00:04:50.114 ], 00:04:50.114 "driver_specific": { 00:04:50.114 "passthru": { 00:04:50.114 "name": "Passthru0", 00:04:50.114 "base_bdev_name": "Malloc2" 00:04:50.114 } 00:04:50.114 } 00:04:50.114 } 00:04:50.114 ]' 00:04:50.114 16:10:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:50.114 16:10:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:50.114 16:10:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:50.114 16:10:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.114 16:10:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.114 16:10:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.114 16:10:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:50.114 16:10:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.114 16:10:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.114 16:10:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.114 16:10:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:50.114 16:10:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.114 16:10:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.114 16:10:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.114 16:10:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:50.114 16:10:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 
-- # jq length 00:04:50.114 16:10:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:50.114 00:04:50.114 real 0m0.260s 00:04:50.114 user 0m0.149s 00:04:50.114 sys 0m0.024s 00:04:50.114 16:10:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:50.114 16:10:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.114 ************************************ 00:04:50.114 END TEST rpc_daemon_integrity 00:04:50.114 ************************************ 00:04:50.114 16:10:09 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:50.114 16:10:09 rpc -- rpc/rpc.sh@84 -- # killprocess 513589 00:04:50.114 16:10:09 rpc -- common/autotest_common.sh@950 -- # '[' -z 513589 ']' 00:04:50.114 16:10:09 rpc -- common/autotest_common.sh@954 -- # kill -0 513589 00:04:50.114 16:10:09 rpc -- common/autotest_common.sh@955 -- # uname 00:04:50.114 16:10:09 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:50.373 16:10:09 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 513589 00:04:50.373 16:10:09 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:50.373 16:10:09 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:50.373 16:10:09 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 513589' 00:04:50.373 killing process with pid 513589 00:04:50.373 16:10:09 rpc -- common/autotest_common.sh@969 -- # kill 513589 00:04:50.373 16:10:09 rpc -- common/autotest_common.sh@974 -- # wait 513589 00:04:52.967 00:04:52.967 real 0m4.906s 00:04:52.967 user 0m5.393s 00:04:52.967 sys 0m0.829s 00:04:52.967 16:10:12 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:52.967 16:10:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.967 ************************************ 00:04:52.967 END TEST rpc 00:04:52.967 ************************************ 00:04:52.967 16:10:12 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:52.967 16:10:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:52.967 16:10:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:52.967 16:10:12 -- common/autotest_common.sh@10 -- # set +x 00:04:52.967 ************************************ 00:04:52.967 START TEST skip_rpc 00:04:52.967 ************************************ 00:04:52.967 16:10:12 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:52.967 * Looking for test storage... 
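The rpc_integrity and rpc_daemon_integrity runs above reduce to the same RPC sequence: create a malloc bdev, claim it behind a passthru bdev, confirm both appear in bdev_get_bdevs, then tear everything down and check the bdev list is empty again. A minimal manual sketch of that sequence, assuming a running spdk_tgt and the stock scripts/rpc.py client on the default /var/tmp/spdk.sock (bdev names mirror the log; this is not the test script itself):

  rpc=scripts/rpc.py
  $rpc bdev_malloc_create 8 512                       # 8 MB, 512 B blocks -> "Malloc0" with 16384 blocks
  $rpc bdev_get_bdevs | jq length                     # expect 1
  $rpc bdev_passthru_create -b Malloc0 -p Passthru0   # Malloc0 becomes "claimed": true
  $rpc bdev_get_bdevs | jq length                     # expect 2
  $rpc bdev_passthru_delete Passthru0
  $rpc bdev_malloc_delete Malloc0
  $rpc bdev_get_bdevs | jq length                     # expect 0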
00:04:52.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:52.967 16:10:12 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:52.967 16:10:12 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:52.967 16:10:12 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:52.967 16:10:12 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:52.967 16:10:12 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:52.967 16:10:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.967 ************************************ 00:04:52.967 START TEST skip_rpc 00:04:52.967 ************************************ 00:04:52.967 16:10:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:52.967 16:10:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=514311 00:04:52.967 16:10:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:52.967 16:10:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.967 16:10:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:52.967 [2024-07-26 16:10:12.593883] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:04:52.967 [2024-07-26 16:10:12.594038] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid514311 ] 00:04:52.967 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.967 [2024-07-26 16:10:12.719296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.225 [2024-07-26 16:10:12.978833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.489 16:10:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:58.489 16:10:17 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:58.489 16:10:17 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:58.489 16:10:17 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:58.489 16:10:17 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:58.489 16:10:17 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:58.489 16:10:17 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:58.489 16:10:17 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:58.489 16:10:17 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.489 16:10:17 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.489 16:10:17 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:58.489 16:10:17 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:58.489 16:10:17 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:58.489 16:10:17 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:58.489 16:10:17 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:58.489 16:10:17 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:58.489 16:10:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 514311 00:04:58.489 16:10:17 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 514311 ']' 00:04:58.489 16:10:17 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 514311 00:04:58.489 16:10:17 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:58.489 16:10:17 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:58.489 16:10:17 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 514311 00:04:58.489 16:10:17 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:58.489 16:10:17 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:58.489 16:10:17 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 514311' 00:04:58.489 killing process with pid 514311 00:04:58.489 16:10:17 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 514311 00:04:58.489 16:10:17 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 514311 00:05:00.391 00:05:00.391 real 0m7.519s 00:05:00.391 user 0m7.048s 00:05:00.391 sys 0m0.460s 00:05:00.391 16:10:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.391 16:10:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.391 ************************************ 00:05:00.391 END TEST skip_rpc 00:05:00.391 ************************************ 00:05:00.391 16:10:20 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:00.391 16:10:20 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.391 16:10:20 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.391 16:10:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.391 ************************************ 00:05:00.391 START TEST skip_rpc_with_json 00:05:00.391 ************************************ 00:05:00.391 16:10:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:00.391 16:10:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:00.391 16:10:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=515264 00:05:00.391 16:10:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:00.391 16:10:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.391 16:10:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 515264 00:05:00.391 16:10:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 515264 ']' 00:05:00.391 16:10:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.391 16:10:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:00.391 16:10:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
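The basic skip_rpc case that finishes above boils down to one assertion: with --no-rpc-server the target must come up but never answer RPC. A hedged sketch of that check, with paths relative to an SPDK build tree (the real test uses its own trap/killprocess helpers for cleanup):

  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt=$!
  sleep 5                                   # the test also waits a fixed 5 s before probing
  if scripts/rpc.py spdk_get_version; then
      echo "FAIL: got an RPC reply although no RPC server should be listening"
  else
      echo "OK: spdk_get_version cannot connect"
  fi
  kill $tgt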
00:05:00.391 16:10:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:00.391 16:10:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:00.649 [2024-07-26 16:10:20.165169] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:00.649 [2024-07-26 16:10:20.165334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid515264 ] 00:05:00.649 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.649 [2024-07-26 16:10:20.286832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.908 [2024-07-26 16:10:20.537108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.843 16:10:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:01.843 16:10:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:01.843 16:10:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:01.843 16:10:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.843 16:10:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:01.843 [2024-07-26 16:10:21.389173] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:01.843 request: 00:05:01.843 { 00:05:01.843 "trtype": "tcp", 00:05:01.843 "method": "nvmf_get_transports", 00:05:01.843 "req_id": 1 00:05:01.843 } 00:05:01.843 Got JSON-RPC error response 00:05:01.843 response: 00:05:01.843 { 00:05:01.843 "code": -19, 00:05:01.843 "message": "No such device" 00:05:01.843 } 00:05:01.843 16:10:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:01.843 16:10:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:01.843 16:10:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.843 16:10:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:01.843 [2024-07-26 16:10:21.397307] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:01.843 16:10:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.843 16:10:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:01.843 16:10:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.843 16:10:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:01.843 16:10:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.843 16:10:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:01.843 { 00:05:01.843 "subsystems": [ 00:05:01.843 { 00:05:01.843 "subsystem": "keyring", 00:05:01.843 "config": [] 00:05:01.843 }, 00:05:01.843 { 00:05:01.843 "subsystem": "iobuf", 00:05:01.843 "config": [ 00:05:01.843 { 00:05:01.843 "method": "iobuf_set_options", 00:05:01.843 "params": { 00:05:01.843 "small_pool_count": 8192, 00:05:01.843 "large_pool_count": 1024, 00:05:01.843 "small_bufsize": 8192, 00:05:01.843 "large_bufsize": 135168 00:05:01.843 } 00:05:01.843 } 00:05:01.843 ] 00:05:01.843 }, 00:05:01.843 { 00:05:01.843 "subsystem": 
"sock", 00:05:01.843 "config": [ 00:05:01.843 { 00:05:01.843 "method": "sock_set_default_impl", 00:05:01.843 "params": { 00:05:01.843 "impl_name": "posix" 00:05:01.843 } 00:05:01.843 }, 00:05:01.843 { 00:05:01.843 "method": "sock_impl_set_options", 00:05:01.843 "params": { 00:05:01.843 "impl_name": "ssl", 00:05:01.843 "recv_buf_size": 4096, 00:05:01.843 "send_buf_size": 4096, 00:05:01.843 "enable_recv_pipe": true, 00:05:01.843 "enable_quickack": false, 00:05:01.843 "enable_placement_id": 0, 00:05:01.843 "enable_zerocopy_send_server": true, 00:05:01.843 "enable_zerocopy_send_client": false, 00:05:01.843 "zerocopy_threshold": 0, 00:05:01.843 "tls_version": 0, 00:05:01.843 "enable_ktls": false 00:05:01.843 } 00:05:01.843 }, 00:05:01.843 { 00:05:01.843 "method": "sock_impl_set_options", 00:05:01.843 "params": { 00:05:01.843 "impl_name": "posix", 00:05:01.843 "recv_buf_size": 2097152, 00:05:01.843 "send_buf_size": 2097152, 00:05:01.843 "enable_recv_pipe": true, 00:05:01.843 "enable_quickack": false, 00:05:01.843 "enable_placement_id": 0, 00:05:01.843 "enable_zerocopy_send_server": true, 00:05:01.843 "enable_zerocopy_send_client": false, 00:05:01.843 "zerocopy_threshold": 0, 00:05:01.843 "tls_version": 0, 00:05:01.843 "enable_ktls": false 00:05:01.843 } 00:05:01.843 } 00:05:01.843 ] 00:05:01.843 }, 00:05:01.843 { 00:05:01.844 "subsystem": "vmd", 00:05:01.844 "config": [] 00:05:01.844 }, 00:05:01.844 { 00:05:01.844 "subsystem": "accel", 00:05:01.844 "config": [ 00:05:01.844 { 00:05:01.844 "method": "accel_set_options", 00:05:01.844 "params": { 00:05:01.844 "small_cache_size": 128, 00:05:01.844 "large_cache_size": 16, 00:05:01.844 "task_count": 2048, 00:05:01.844 "sequence_count": 2048, 00:05:01.844 "buf_count": 2048 00:05:01.844 } 00:05:01.844 } 00:05:01.844 ] 00:05:01.844 }, 00:05:01.844 { 00:05:01.844 "subsystem": "bdev", 00:05:01.844 "config": [ 00:05:01.844 { 00:05:01.844 "method": "bdev_set_options", 00:05:01.844 "params": { 00:05:01.844 "bdev_io_pool_size": 65535, 00:05:01.844 "bdev_io_cache_size": 256, 00:05:01.844 "bdev_auto_examine": true, 00:05:01.844 "iobuf_small_cache_size": 128, 00:05:01.844 "iobuf_large_cache_size": 16 00:05:01.844 } 00:05:01.844 }, 00:05:01.844 { 00:05:01.844 "method": "bdev_raid_set_options", 00:05:01.844 "params": { 00:05:01.844 "process_window_size_kb": 1024, 00:05:01.844 "process_max_bandwidth_mb_sec": 0 00:05:01.844 } 00:05:01.844 }, 00:05:01.844 { 00:05:01.844 "method": "bdev_iscsi_set_options", 00:05:01.844 "params": { 00:05:01.844 "timeout_sec": 30 00:05:01.844 } 00:05:01.844 }, 00:05:01.844 { 00:05:01.844 "method": "bdev_nvme_set_options", 00:05:01.844 "params": { 00:05:01.844 "action_on_timeout": "none", 00:05:01.844 "timeout_us": 0, 00:05:01.844 "timeout_admin_us": 0, 00:05:01.844 "keep_alive_timeout_ms": 10000, 00:05:01.844 "arbitration_burst": 0, 00:05:01.844 "low_priority_weight": 0, 00:05:01.844 "medium_priority_weight": 0, 00:05:01.844 "high_priority_weight": 0, 00:05:01.844 "nvme_adminq_poll_period_us": 10000, 00:05:01.844 "nvme_ioq_poll_period_us": 0, 00:05:01.844 "io_queue_requests": 0, 00:05:01.844 "delay_cmd_submit": true, 00:05:01.844 "transport_retry_count": 4, 00:05:01.844 "bdev_retry_count": 3, 00:05:01.844 "transport_ack_timeout": 0, 00:05:01.844 "ctrlr_loss_timeout_sec": 0, 00:05:01.844 "reconnect_delay_sec": 0, 00:05:01.844 "fast_io_fail_timeout_sec": 0, 00:05:01.844 "disable_auto_failback": false, 00:05:01.844 "generate_uuids": false, 00:05:01.844 "transport_tos": 0, 00:05:01.844 "nvme_error_stat": false, 00:05:01.844 "rdma_srq_size": 
0, 00:05:01.844 "io_path_stat": false, 00:05:01.844 "allow_accel_sequence": false, 00:05:01.844 "rdma_max_cq_size": 0, 00:05:01.844 "rdma_cm_event_timeout_ms": 0, 00:05:01.844 "dhchap_digests": [ 00:05:01.844 "sha256", 00:05:01.844 "sha384", 00:05:01.844 "sha512" 00:05:01.844 ], 00:05:01.844 "dhchap_dhgroups": [ 00:05:01.844 "null", 00:05:01.844 "ffdhe2048", 00:05:01.844 "ffdhe3072", 00:05:01.844 "ffdhe4096", 00:05:01.844 "ffdhe6144", 00:05:01.844 "ffdhe8192" 00:05:01.844 ] 00:05:01.844 } 00:05:01.844 }, 00:05:01.844 { 00:05:01.844 "method": "bdev_nvme_set_hotplug", 00:05:01.844 "params": { 00:05:01.844 "period_us": 100000, 00:05:01.844 "enable": false 00:05:01.844 } 00:05:01.844 }, 00:05:01.844 { 00:05:01.844 "method": "bdev_wait_for_examine" 00:05:01.844 } 00:05:01.844 ] 00:05:01.844 }, 00:05:01.844 { 00:05:01.844 "subsystem": "scsi", 00:05:01.844 "config": null 00:05:01.844 }, 00:05:01.844 { 00:05:01.844 "subsystem": "scheduler", 00:05:01.844 "config": [ 00:05:01.844 { 00:05:01.844 "method": "framework_set_scheduler", 00:05:01.844 "params": { 00:05:01.844 "name": "static" 00:05:01.844 } 00:05:01.844 } 00:05:01.844 ] 00:05:01.844 }, 00:05:01.844 { 00:05:01.844 "subsystem": "vhost_scsi", 00:05:01.844 "config": [] 00:05:01.844 }, 00:05:01.844 { 00:05:01.844 "subsystem": "vhost_blk", 00:05:01.844 "config": [] 00:05:01.844 }, 00:05:01.844 { 00:05:01.844 "subsystem": "ublk", 00:05:01.844 "config": [] 00:05:01.844 }, 00:05:01.844 { 00:05:01.844 "subsystem": "nbd", 00:05:01.844 "config": [] 00:05:01.844 }, 00:05:01.844 { 00:05:01.844 "subsystem": "nvmf", 00:05:01.844 "config": [ 00:05:01.844 { 00:05:01.844 "method": "nvmf_set_config", 00:05:01.844 "params": { 00:05:01.844 "discovery_filter": "match_any", 00:05:01.844 "admin_cmd_passthru": { 00:05:01.844 "identify_ctrlr": false 00:05:01.844 } 00:05:01.844 } 00:05:01.844 }, 00:05:01.844 { 00:05:01.844 "method": "nvmf_set_max_subsystems", 00:05:01.844 "params": { 00:05:01.844 "max_subsystems": 1024 00:05:01.844 } 00:05:01.844 }, 00:05:01.844 { 00:05:01.844 "method": "nvmf_set_crdt", 00:05:01.844 "params": { 00:05:01.844 "crdt1": 0, 00:05:01.844 "crdt2": 0, 00:05:01.844 "crdt3": 0 00:05:01.844 } 00:05:01.844 }, 00:05:01.844 { 00:05:01.844 "method": "nvmf_create_transport", 00:05:01.844 "params": { 00:05:01.844 "trtype": "TCP", 00:05:01.844 "max_queue_depth": 128, 00:05:01.844 "max_io_qpairs_per_ctrlr": 127, 00:05:01.844 "in_capsule_data_size": 4096, 00:05:01.844 "max_io_size": 131072, 00:05:01.844 "io_unit_size": 131072, 00:05:01.844 "max_aq_depth": 128, 00:05:01.844 "num_shared_buffers": 511, 00:05:01.844 "buf_cache_size": 4294967295, 00:05:01.844 "dif_insert_or_strip": false, 00:05:01.844 "zcopy": false, 00:05:01.844 "c2h_success": true, 00:05:01.844 "sock_priority": 0, 00:05:01.844 "abort_timeout_sec": 1, 00:05:01.844 "ack_timeout": 0, 00:05:01.844 "data_wr_pool_size": 0 00:05:01.844 } 00:05:01.844 } 00:05:01.844 ] 00:05:01.844 }, 00:05:01.844 { 00:05:01.844 "subsystem": "iscsi", 00:05:01.844 "config": [ 00:05:01.844 { 00:05:01.844 "method": "iscsi_set_options", 00:05:01.844 "params": { 00:05:01.844 "node_base": "iqn.2016-06.io.spdk", 00:05:01.844 "max_sessions": 128, 00:05:01.844 "max_connections_per_session": 2, 00:05:01.844 "max_queue_depth": 64, 00:05:01.844 "default_time2wait": 2, 00:05:01.844 "default_time2retain": 20, 00:05:01.844 "first_burst_length": 8192, 00:05:01.844 "immediate_data": true, 00:05:01.844 "allow_duplicated_isid": false, 00:05:01.844 "error_recovery_level": 0, 00:05:01.844 "nop_timeout": 60, 00:05:01.844 
"nop_in_interval": 30, 00:05:01.844 "disable_chap": false, 00:05:01.844 "require_chap": false, 00:05:01.844 "mutual_chap": false, 00:05:01.844 "chap_group": 0, 00:05:01.844 "max_large_datain_per_connection": 64, 00:05:01.844 "max_r2t_per_connection": 4, 00:05:01.844 "pdu_pool_size": 36864, 00:05:01.844 "immediate_data_pool_size": 16384, 00:05:01.844 "data_out_pool_size": 2048 00:05:01.844 } 00:05:01.844 } 00:05:01.844 ] 00:05:01.844 } 00:05:01.844 ] 00:05:01.844 } 00:05:01.844 16:10:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:01.844 16:10:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 515264 00:05:01.844 16:10:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 515264 ']' 00:05:01.844 16:10:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 515264 00:05:01.844 16:10:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:01.844 16:10:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:01.844 16:10:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 515264 00:05:01.844 16:10:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:01.844 16:10:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:01.844 16:10:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 515264' 00:05:01.844 killing process with pid 515264 00:05:01.844 16:10:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 515264 00:05:01.844 16:10:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 515264 00:05:04.373 16:10:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=515682 00:05:04.373 16:10:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:04.373 16:10:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:09.635 16:10:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 515682 00:05:09.635 16:10:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 515682 ']' 00:05:09.635 16:10:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 515682 00:05:09.635 16:10:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:09.635 16:10:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:09.635 16:10:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 515682 00:05:09.636 16:10:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:09.636 16:10:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:09.636 16:10:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 515682' 00:05:09.636 killing process with pid 515682 00:05:09.636 16:10:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 515682 00:05:09.636 16:10:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 515682 00:05:12.166 16:10:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport 
Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:12.166 16:10:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:12.166 00:05:12.166 real 0m11.525s 00:05:12.166 user 0m10.998s 00:05:12.166 sys 0m1.031s 00:05:12.166 16:10:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.166 16:10:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:12.167 ************************************ 00:05:12.167 END TEST skip_rpc_with_json 00:05:12.167 ************************************ 00:05:12.167 16:10:31 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:12.167 16:10:31 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.167 16:10:31 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.167 16:10:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.167 ************************************ 00:05:12.167 START TEST skip_rpc_with_delay 00:05:12.167 ************************************ 00:05:12.167 16:10:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:12.167 16:10:31 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:12.167 16:10:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:12.167 16:10:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:12.167 16:10:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:12.167 16:10:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:12.167 16:10:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:12.167 16:10:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:12.167 16:10:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:12.167 16:10:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:12.167 16:10:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:12.167 16:10:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:12.167 16:10:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:12.167 [2024-07-26 16:10:31.730490] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
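The skip_rpc_with_json run that ends above follows a save-and-replay pattern: make the live configuration non-trivial, dump it with save_config, then restart the target with --json and check that the saved state was restored. A rough equivalent, assuming scripts/rpc.py and a build-tree spdk_tgt (file names here are illustrative; the test keeps its own CONFIG_PATH and LOG_PATH):

  scripts/rpc.py nvmf_create_transport -t tcp      # produces "*** TCP Transport Init ***" in the target log
  scripts/rpc.py save_config > config.json         # full JSON config, as printed above
  # stop the first target, then replay the config with the RPC server disabled:
  build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
  sleep 5
  grep -q 'TCP Transport Init' log.txt && echo "TCP transport restored from config.json"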
00:05:12.167 [2024-07-26 16:10:31.730687] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:12.167 16:10:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:12.167 16:10:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:12.167 16:10:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:12.167 16:10:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:12.167 00:05:12.167 real 0m0.141s 00:05:12.167 user 0m0.079s 00:05:12.167 sys 0m0.061s 00:05:12.167 16:10:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.167 16:10:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:12.167 ************************************ 00:05:12.167 END TEST skip_rpc_with_delay 00:05:12.167 ************************************ 00:05:12.167 16:10:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:12.167 16:10:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:12.167 16:10:31 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:12.167 16:10:31 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.167 16:10:31 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.167 16:10:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.167 ************************************ 00:05:12.167 START TEST exit_on_failed_rpc_init 00:05:12.167 ************************************ 00:05:12.167 16:10:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:12.167 16:10:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=516660 00:05:12.167 16:10:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:12.167 16:10:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 516660 00:05:12.167 16:10:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 516660 ']' 00:05:12.167 16:10:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.167 16:10:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:12.167 16:10:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.167 16:10:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:12.167 16:10:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:12.167 [2024-07-26 16:10:31.914102] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
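skip_rpc_with_delay, which finishes above, is purely a negative test: --wait-for-rpc only makes sense when an RPC server will be started, so combining it with --no-rpc-server must be rejected before initialization. A short sketch of that assertion, under the same path assumptions as the earlier sketches:

  if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "FAIL: target initialized despite contradictory RPC flags"
  else
      echo "OK: startup refused, matching the app.c error above"
  fi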
00:05:12.167 [2024-07-26 16:10:31.914275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid516660 ] 00:05:12.426 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.426 [2024-07-26 16:10:32.040211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.684 [2024-07-26 16:10:32.291825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.626 16:10:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:13.626 16:10:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:13.626 16:10:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:13.626 16:10:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:13.626 16:10:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:13.626 16:10:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:13.626 16:10:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:13.626 16:10:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:13.626 16:10:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:13.626 16:10:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:13.626 16:10:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:13.626 16:10:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:13.626 16:10:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:13.626 16:10:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:13.626 16:10:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:13.626 [2024-07-26 16:10:33.257024] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:13.626 [2024-07-26 16:10:33.257209] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid516815 ] 00:05:13.626 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.883 [2024-07-26 16:10:33.401398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.141 [2024-07-26 16:10:33.654975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.141 [2024-07-26 16:10:33.655142] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:14.141 [2024-07-26 16:10:33.655174] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:14.141 [2024-07-26 16:10:33.655196] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:14.400 16:10:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:14.400 16:10:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:14.400 16:10:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:14.400 16:10:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:14.400 16:10:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:14.400 16:10:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:14.400 16:10:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:14.400 16:10:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 516660 00:05:14.400 16:10:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 516660 ']' 00:05:14.400 16:10:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 516660 00:05:14.400 16:10:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:14.400 16:10:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:14.400 16:10:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 516660 00:05:14.400 16:10:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:14.400 16:10:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:14.400 16:10:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 516660' 00:05:14.400 killing process with pid 516660 00:05:14.400 16:10:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 516660 00:05:14.400 16:10:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 516660 00:05:16.958 00:05:16.958 real 0m4.821s 00:05:16.958 user 0m5.510s 00:05:16.958 sys 0m0.771s 00:05:16.958 16:10:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.958 16:10:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:16.958 ************************************ 00:05:16.958 END TEST exit_on_failed_rpc_init 00:05:16.958 ************************************ 00:05:16.958 16:10:36 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 
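exit_on_failed_rpc_init, closed out above, starts a second target against the RPC socket the first one already owns and expects spdk_app_start to fail. A compressed sketch of that scenario (the real test uses waitforlisten and killprocess helpers rather than sleep/kill, and checks the exit status via its NOT wrapper):

  build/bin/spdk_tgt -m 0x1 &                 # first target claims /var/tmp/spdk.sock
  tgt=$!
  sleep 5
  build/bin/spdk_tgt -m 0x2                   # "RPC Unix domain socket path ... in use" -> non-zero exit
  echo "second target exit code: $?"
  kill $tgt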
00:05:16.958 00:05:16.958 real 0m24.236s 00:05:16.958 user 0m23.717s 00:05:16.958 sys 0m2.487s 00:05:16.958 16:10:36 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.958 16:10:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.958 ************************************ 00:05:16.958 END TEST skip_rpc 00:05:16.958 ************************************ 00:05:16.958 16:10:36 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:16.958 16:10:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:16.958 16:10:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.958 16:10:36 -- common/autotest_common.sh@10 -- # set +x 00:05:16.958 ************************************ 00:05:16.958 START TEST rpc_client 00:05:16.958 ************************************ 00:05:16.958 16:10:36 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:17.217 * Looking for test storage... 00:05:17.217 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:17.217 16:10:36 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:17.217 OK 00:05:17.217 16:10:36 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:17.217 00:05:17.217 real 0m0.102s 00:05:17.217 user 0m0.049s 00:05:17.217 sys 0m0.058s 00:05:17.217 16:10:36 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.217 16:10:36 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:17.217 ************************************ 00:05:17.217 END TEST rpc_client 00:05:17.217 ************************************ 00:05:17.217 16:10:36 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:17.217 16:10:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:17.217 16:10:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.217 16:10:36 -- common/autotest_common.sh@10 -- # set +x 00:05:17.217 ************************************ 00:05:17.217 START TEST json_config 00:05:17.217 ************************************ 00:05:17.217 16:10:36 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:17.217 16:10:36 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:17.217 16:10:36 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:17.217 16:10:36 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:17.217 16:10:36 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:17.217 16:10:36 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:17.217 16:10:36 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:17.217 16:10:36 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:17.217 16:10:36 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:17.217 16:10:36 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:17.217 16:10:36 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:17.217 16:10:36 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:17.217 16:10:36 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:05:17.217 16:10:36 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:17.217 16:10:36 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:17.217 16:10:36 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:17.217 16:10:36 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:17.217 16:10:36 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:17.217 16:10:36 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:17.217 16:10:36 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:17.217 16:10:36 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:17.217 16:10:36 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:17.217 16:10:36 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:17.217 16:10:36 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.217 16:10:36 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.217 16:10:36 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.217 16:10:36 json_config -- paths/export.sh@5 -- # export PATH 00:05:17.217 16:10:36 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.217 16:10:36 json_config -- nvmf/common.sh@47 -- # : 0 00:05:17.217 16:10:36 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:17.217 16:10:36 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:17.217 16:10:36 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:17.217 16:10:36 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:17.217 16:10:36 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:17.217 16:10:36 json_config -- nvmf/common.sh@33 -- # '[' -n '' 
']' 00:05:17.217 16:10:36 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:17.217 16:10:36 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:17.218 16:10:36 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:17.218 16:10:36 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:17.218 16:10:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:17.218 16:10:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:17.218 16:10:36 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:17.218 16:10:36 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:17.218 16:10:36 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:17.218 16:10:36 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:17.218 16:10:36 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:17.218 16:10:36 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:17.218 16:10:36 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:17.218 16:10:36 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:17.218 16:10:36 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:17.218 16:10:36 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:17.218 16:10:36 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:17.218 16:10:36 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:05:17.218 INFO: JSON configuration test init 00:05:17.218 16:10:36 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:05:17.218 16:10:36 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:05:17.218 16:10:36 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:17.218 16:10:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.218 16:10:36 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:05:17.218 16:10:36 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:17.218 16:10:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.218 16:10:36 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:05:17.218 16:10:36 json_config -- json_config/common.sh@9 -- # local app=target 00:05:17.218 16:10:36 json_config -- json_config/common.sh@10 -- # shift 00:05:17.218 16:10:36 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:17.218 16:10:36 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:17.218 16:10:36 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:17.218 16:10:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:17.218 16:10:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:17.218 
16:10:36 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=517437 00:05:17.218 16:10:36 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:17.218 16:10:36 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:17.218 Waiting for target to run... 00:05:17.218 16:10:36 json_config -- json_config/common.sh@25 -- # waitforlisten 517437 /var/tmp/spdk_tgt.sock 00:05:17.218 16:10:36 json_config -- common/autotest_common.sh@831 -- # '[' -z 517437 ']' 00:05:17.218 16:10:36 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:17.218 16:10:36 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:17.218 16:10:36 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:17.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:17.218 16:10:36 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:17.218 16:10:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.477 [2024-07-26 16:10:37.014892] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:17.477 [2024-07-26 16:10:37.015088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid517437 ] 00:05:17.477 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.735 [2024-07-26 16:10:37.433321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.994 [2024-07-26 16:10:37.661900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.252 16:10:37 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:18.252 16:10:37 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:18.252 16:10:37 json_config -- json_config/common.sh@26 -- # echo '' 00:05:18.252 00:05:18.252 16:10:37 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:05:18.252 16:10:37 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:05:18.252 16:10:37 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:18.252 16:10:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.252 16:10:37 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:05:18.252 16:10:37 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:05:18.252 16:10:37 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:18.252 16:10:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.252 16:10:37 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:18.252 16:10:37 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:05:18.252 16:10:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:22.437 16:10:41 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:05:22.437 16:10:41 json_config -- json_config/json_config.sh@43 -- # timing_enter 
tgt_check_notification_types 00:05:22.437 16:10:41 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:22.437 16:10:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.437 16:10:41 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:22.437 16:10:41 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:22.437 16:10:41 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:22.437 16:10:41 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:22.437 16:10:41 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:22.437 16:10:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:22.437 16:10:41 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:22.437 16:10:41 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:22.437 16:10:41 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:05:22.437 16:10:41 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:05:22.437 16:10:41 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:05:22.437 16:10:41 json_config -- json_config/json_config.sh@51 -- # sort 00:05:22.437 16:10:41 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:05:22.437 16:10:41 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:05:22.437 16:10:41 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:05:22.437 16:10:41 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:05:22.437 16:10:41 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:22.438 16:10:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.438 16:10:42 json_config -- json_config/json_config.sh@59 -- # return 0 00:05:22.438 16:10:42 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:22.438 16:10:42 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:22.438 16:10:42 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:05:22.438 16:10:42 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:05:22.438 16:10:42 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:05:22.438 16:10:42 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:05:22.438 16:10:42 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:22.438 16:10:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.438 16:10:42 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:22.438 16:10:42 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:05:22.438 16:10:42 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:05:22.438 16:10:42 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:22.438 16:10:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:22.696 MallocForNvmf0 00:05:22.696 16:10:42 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name 
MallocForNvmf1 00:05:22.696 16:10:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:22.953 MallocForNvmf1 00:05:22.953 16:10:42 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:22.953 16:10:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:23.211 [2024-07-26 16:10:42.747053] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:23.211 16:10:42 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:23.211 16:10:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:23.468 16:10:43 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:23.468 16:10:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:23.725 16:10:43 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:23.725 16:10:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:23.982 16:10:43 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:23.982 16:10:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:24.240 [2024-07-26 16:10:43.746526] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:24.240 16:10:43 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:05:24.240 16:10:43 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:24.240 16:10:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.240 16:10:43 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:05:24.240 16:10:43 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:24.240 16:10:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.240 16:10:43 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:05:24.240 16:10:43 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:24.240 16:10:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:24.498 MallocBdevForConfigChangeCheck 00:05:24.498 16:10:44 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:05:24.498 16:10:44 json_config -- 
common/autotest_common.sh@730 -- # xtrace_disable 00:05:24.498 16:10:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.498 16:10:44 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:05:24.498 16:10:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:24.755 16:10:44 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:05:24.755 INFO: shutting down applications... 00:05:24.755 16:10:44 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:05:24.755 16:10:44 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:05:24.755 16:10:44 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:05:24.755 16:10:44 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:26.654 Calling clear_iscsi_subsystem 00:05:26.654 Calling clear_nvmf_subsystem 00:05:26.654 Calling clear_nbd_subsystem 00:05:26.654 Calling clear_ublk_subsystem 00:05:26.654 Calling clear_vhost_blk_subsystem 00:05:26.654 Calling clear_vhost_scsi_subsystem 00:05:26.654 Calling clear_bdev_subsystem 00:05:26.654 16:10:46 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:26.654 16:10:46 json_config -- json_config/json_config.sh@347 -- # count=100 00:05:26.654 16:10:46 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:05:26.654 16:10:46 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:26.654 16:10:46 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:26.654 16:10:46 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:26.911 16:10:46 json_config -- json_config/json_config.sh@349 -- # break 00:05:26.911 16:10:46 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:05:26.911 16:10:46 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:05:26.912 16:10:46 json_config -- json_config/common.sh@31 -- # local app=target 00:05:26.912 16:10:46 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:26.912 16:10:46 json_config -- json_config/common.sh@35 -- # [[ -n 517437 ]] 00:05:26.912 16:10:46 json_config -- json_config/common.sh@38 -- # kill -SIGINT 517437 00:05:26.912 16:10:46 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:26.912 16:10:46 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:26.912 16:10:46 json_config -- json_config/common.sh@41 -- # kill -0 517437 00:05:26.912 16:10:46 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:27.477 16:10:46 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:27.477 16:10:46 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:27.477 16:10:46 json_config -- json_config/common.sh@41 -- # kill -0 517437 00:05:27.477 16:10:46 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:27.736 16:10:47 json_config -- json_config/common.sh@40 -- # (( i++ )) 
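The kill -SIGINT followed by repeated kill -0 polling seen here is the common shutdown idiom in these tests. A sketch of the loop, with the 30 x 0.5 s budget taken from the trace and the surrounding details assumed:

shutdown_app() {
    local pid=$1
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do                  # wait up to ~15 seconds
        kill -0 "$pid" 2>/dev/null || { echo 'SPDK target shutdown done'; return 0; }
        sleep 0.5
    done
    return 1                                          # still alive; the real helper escalates from here
}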
00:05:27.736 16:10:47 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:27.736 16:10:47 json_config -- json_config/common.sh@41 -- # kill -0 517437 00:05:27.736 16:10:47 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:28.301 16:10:47 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:28.301 16:10:47 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:28.301 16:10:47 json_config -- json_config/common.sh@41 -- # kill -0 517437 00:05:28.301 16:10:47 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:28.301 16:10:47 json_config -- json_config/common.sh@43 -- # break 00:05:28.301 16:10:47 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:28.301 16:10:47 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:28.301 SPDK target shutdown done 00:05:28.301 16:10:47 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:05:28.301 INFO: relaunching applications... 00:05:28.301 16:10:47 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:28.301 16:10:47 json_config -- json_config/common.sh@9 -- # local app=target 00:05:28.301 16:10:47 json_config -- json_config/common.sh@10 -- # shift 00:05:28.301 16:10:47 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:28.301 16:10:47 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:28.301 16:10:47 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:28.301 16:10:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:28.301 16:10:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:28.301 16:10:47 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=518896 00:05:28.301 16:10:47 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:28.301 16:10:47 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:28.301 Waiting for target to run... 00:05:28.301 16:10:47 json_config -- json_config/common.sh@25 -- # waitforlisten 518896 /var/tmp/spdk_tgt.sock 00:05:28.301 16:10:47 json_config -- common/autotest_common.sh@831 -- # '[' -z 518896 ']' 00:05:28.301 16:10:47 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:28.301 16:10:47 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:28.301 16:10:47 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:28.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:28.301 16:10:47 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:28.301 16:10:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.559 [2024-07-26 16:10:48.081322] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
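waitforlisten blocks until the relaunched target answers RPCs on /var/tmp/spdk_tgt.sock. A rough sketch of such a readiness poll, assuming rpc_get_methods as the probe; the actual helper in autotest_common.sh differs in its details:

waitforlisten_sketch() {
    local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock}
    for (( i = 0; i < 100; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1                                # target died during startup
        ./scripts/rpc.py -s "$sock" -t 1 rpc_get_methods >/dev/null 2>&1 && return 0
        sleep 0.1
    done
    return 1
}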
00:05:28.559 [2024-07-26 16:10:48.081471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid518896 ] 00:05:28.559 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.125 [2024-07-26 16:10:48.690147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.383 [2024-07-26 16:10:48.915103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.562 [2024-07-26 16:10:52.630902] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:33.562 [2024-07-26 16:10:52.663514] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:33.562 16:10:53 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:33.562 16:10:53 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:33.562 16:10:53 json_config -- json_config/common.sh@26 -- # echo '' 00:05:33.562 00:05:33.562 16:10:53 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:05:33.562 16:10:53 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:33.562 INFO: Checking if target configuration is the same... 00:05:33.562 16:10:53 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:33.562 16:10:53 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:05:33.562 16:10:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:33.562 + '[' 2 -ne 2 ']' 00:05:33.562 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:33.562 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:33.562 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:33.562 +++ basename /dev/fd/62 00:05:33.562 ++ mktemp /tmp/62.XXX 00:05:33.562 + tmp_file_1=/tmp/62.zmt 00:05:33.562 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:33.562 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:33.562 + tmp_file_2=/tmp/spdk_tgt_config.json.4Rb 00:05:33.562 + ret=0 00:05:33.562 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:34.127 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:34.127 + diff -u /tmp/62.zmt /tmp/spdk_tgt_config.json.4Rb 00:05:34.127 + echo 'INFO: JSON config files are the same' 00:05:34.127 INFO: JSON config files are the same 00:05:34.127 + rm /tmp/62.zmt /tmp/spdk_tgt_config.json.4Rb 00:05:34.127 + exit 0 00:05:34.127 16:10:53 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:05:34.127 16:10:53 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:34.127 INFO: changing configuration and checking if this can be detected... 
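What json_diff.sh does above, roughly: dump the live configuration, normalize both sides with config_filter.py -method sort so ordering cannot produce false diffs, then compare. A condensed sketch (paths relative to the spdk tree, temp-file handling simplified):

./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | ./test/json_config/config_filter.py -method sort > /tmp/live.json
./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.json
diff -u /tmp/saved.json /tmp/live.json \
    && echo 'INFO: JSON config files are the same'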
00:05:34.127 16:10:53 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:34.127 16:10:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:34.385 16:10:53 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:34.385 16:10:53 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:05:34.385 16:10:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:34.385 + '[' 2 -ne 2 ']' 00:05:34.385 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:34.385 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:34.385 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:34.385 +++ basename /dev/fd/62 00:05:34.385 ++ mktemp /tmp/62.XXX 00:05:34.385 + tmp_file_1=/tmp/62.oGO 00:05:34.385 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:34.385 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:34.385 + tmp_file_2=/tmp/spdk_tgt_config.json.Yw6 00:05:34.385 + ret=0 00:05:34.385 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:34.643 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:34.643 + diff -u /tmp/62.oGO /tmp/spdk_tgt_config.json.Yw6 00:05:34.643 + ret=1 00:05:34.643 + echo '=== Start of file: /tmp/62.oGO ===' 00:05:34.643 + cat /tmp/62.oGO 00:05:34.643 + echo '=== End of file: /tmp/62.oGO ===' 00:05:34.643 + echo '' 00:05:34.643 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Yw6 ===' 00:05:34.643 + cat /tmp/spdk_tgt_config.json.Yw6 00:05:34.643 + echo '=== End of file: /tmp/spdk_tgt_config.json.Yw6 ===' 00:05:34.643 + echo '' 00:05:34.643 + rm /tmp/62.oGO /tmp/spdk_tgt_config.json.Yw6 00:05:34.643 + exit 1 00:05:34.643 16:10:54 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:05:34.643 INFO: configuration change detected. 
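Continuing the sketch above: to prove that changes really are detected, the test removes the marker bdev and expects the same normalized diff to fail:

./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | ./test/json_config/config_filter.py -method sort \
    | diff -u /tmp/saved.json - \
    || echo 'INFO: configuration change detected.'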
00:05:34.643 16:10:54 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:05:34.644 16:10:54 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:05:34.644 16:10:54 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:34.644 16:10:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.644 16:10:54 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:05:34.644 16:10:54 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:05:34.644 16:10:54 json_config -- json_config/json_config.sh@321 -- # [[ -n 518896 ]] 00:05:34.644 16:10:54 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:05:34.644 16:10:54 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:05:34.644 16:10:54 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:34.644 16:10:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.644 16:10:54 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:05:34.644 16:10:54 json_config -- json_config/json_config.sh@197 -- # uname -s 00:05:34.644 16:10:54 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:05:34.644 16:10:54 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:05:34.644 16:10:54 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:05:34.644 16:10:54 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:05:34.644 16:10:54 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:34.644 16:10:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.644 16:10:54 json_config -- json_config/json_config.sh@327 -- # killprocess 518896 00:05:34.644 16:10:54 json_config -- common/autotest_common.sh@950 -- # '[' -z 518896 ']' 00:05:34.644 16:10:54 json_config -- common/autotest_common.sh@954 -- # kill -0 518896 00:05:34.644 16:10:54 json_config -- common/autotest_common.sh@955 -- # uname 00:05:34.644 16:10:54 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:34.644 16:10:54 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 518896 00:05:34.644 16:10:54 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:34.644 16:10:54 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:34.644 16:10:54 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 518896' 00:05:34.644 killing process with pid 518896 00:05:34.644 16:10:54 json_config -- common/autotest_common.sh@969 -- # kill 518896 00:05:34.644 16:10:54 json_config -- common/autotest_common.sh@974 -- # wait 518896 00:05:37.173 16:10:56 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:37.173 16:10:56 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:05:37.173 16:10:56 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:37.173 16:10:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.173 16:10:56 json_config -- json_config/json_config.sh@332 -- # return 0 00:05:37.173 16:10:56 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:05:37.173 INFO: Success 00:05:37.173 00:05:37.173 real 0m19.983s 00:05:37.173 user 
0m21.495s 00:05:37.173 sys 0m2.508s 00:05:37.173 16:10:56 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:37.174 16:10:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.174 ************************************ 00:05:37.174 END TEST json_config 00:05:37.174 ************************************ 00:05:37.174 16:10:56 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:37.174 16:10:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:37.174 16:10:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.174 16:10:56 -- common/autotest_common.sh@10 -- # set +x 00:05:37.174 ************************************ 00:05:37.174 START TEST json_config_extra_key 00:05:37.174 ************************************ 00:05:37.174 16:10:56 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:37.435 16:10:56 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:37.435 16:10:56 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:37.435 16:10:56 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:37.435 16:10:56 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:37.435 16:10:56 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:37.435 16:10:56 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:37.435 16:10:56 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:37.435 16:10:56 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:37.435 16:10:56 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:37.435 16:10:56 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:37.435 16:10:56 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:37.435 16:10:56 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:37.435 16:10:56 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:37.435 16:10:56 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:37.435 16:10:56 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:37.435 16:10:56 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:37.435 16:10:56 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:37.435 16:10:56 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:37.435 16:10:56 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:37.435 16:10:56 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:37.435 16:10:56 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:37.435 16:10:56 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:37.435 16:10:56 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.435 16:10:56 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.435 16:10:56 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.435 16:10:56 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:37.435 16:10:56 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.435 16:10:56 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:37.435 16:10:56 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:37.435 16:10:56 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:37.435 16:10:56 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:37.435 16:10:56 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:37.435 16:10:56 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:37.435 16:10:56 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:37.435 16:10:56 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:37.435 16:10:56 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:37.435 16:10:56 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:37.435 16:10:56 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:37.435 16:10:56 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:37.435 16:10:56 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:37.435 16:10:56 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:37.435 16:10:56 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:37.435 16:10:56 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:37.435 16:10:56 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:37.435 16:10:56 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:37.435 16:10:56 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:37.435 16:10:56 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:37.435 INFO: launching applications... 00:05:37.435 16:10:56 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:37.435 16:10:56 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:37.435 16:10:56 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:37.435 16:10:56 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:37.435 16:10:56 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:37.435 16:10:56 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:37.435 16:10:56 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:37.435 16:10:56 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:37.435 16:10:56 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=520076 00:05:37.435 16:10:56 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:37.435 16:10:56 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:37.435 Waiting for target to run... 00:05:37.435 16:10:56 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 520076 /var/tmp/spdk_tgt.sock 00:05:37.435 16:10:56 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 520076 ']' 00:05:37.435 16:10:56 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:37.435 16:10:56 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:37.435 16:10:56 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:37.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:37.435 16:10:56 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:37.435 16:10:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:37.435 [2024-07-26 16:10:57.043648] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
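The target here boots straight from a JSON file instead of waiting for RPCs. For illustration only, a minimal file of the same general shape (this is not the content of extra_key.json; the bdev name and sizes are made up):

cat > /tmp/extra_key.example.json << 'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 2048, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF
./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /tmp/extra_key.example.json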
00:05:37.435 [2024-07-26 16:10:57.043802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid520076 ] 00:05:37.435 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.037 [2024-07-26 16:10:57.645586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.295 [2024-07-26 16:10:57.886272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.861 16:10:58 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:38.861 16:10:58 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:38.861 16:10:58 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:38.862 00:05:38.862 16:10:58 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:38.862 INFO: shutting down applications... 00:05:38.862 16:10:58 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:38.862 16:10:58 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:38.862 16:10:58 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:38.862 16:10:58 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 520076 ]] 00:05:38.862 16:10:58 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 520076 00:05:38.862 16:10:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:38.862 16:10:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:38.862 16:10:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 520076 00:05:38.862 16:10:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:39.428 16:10:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:39.428 16:10:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:39.428 16:10:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 520076 00:05:39.428 16:10:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:39.995 16:10:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:39.995 16:10:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:39.995 16:10:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 520076 00:05:39.995 16:10:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:40.563 16:11:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:40.563 16:11:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:40.563 16:11:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 520076 00:05:40.563 16:11:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:41.128 16:11:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:41.128 16:11:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:41.128 16:11:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 520076 00:05:41.128 16:11:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:41.386 16:11:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:41.386 16:11:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:41.386 16:11:01 json_config_extra_key -- 
json_config/common.sh@41 -- # kill -0 520076 00:05:41.386 16:11:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:41.952 16:11:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:41.952 16:11:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:41.952 16:11:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 520076 00:05:41.952 16:11:01 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:41.952 16:11:01 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:41.952 16:11:01 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:41.952 16:11:01 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:41.952 SPDK target shutdown done 00:05:41.952 16:11:01 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:41.952 Success 00:05:41.952 00:05:41.952 real 0m4.707s 00:05:41.952 user 0m4.258s 00:05:41.952 sys 0m0.812s 00:05:41.952 16:11:01 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.952 16:11:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:41.952 ************************************ 00:05:41.952 END TEST json_config_extra_key 00:05:41.952 ************************************ 00:05:41.952 16:11:01 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:41.952 16:11:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.952 16:11:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.952 16:11:01 -- common/autotest_common.sh@10 -- # set +x 00:05:41.952 ************************************ 00:05:41.952 START TEST alias_rpc 00:05:41.952 ************************************ 00:05:41.952 16:11:01 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:41.952 * Looking for test storage... 00:05:41.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:41.952 16:11:01 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:41.952 16:11:01 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=520671 00:05:41.952 16:11:01 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:41.952 16:11:01 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 520671 00:05:41.952 16:11:01 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 520671 ']' 00:05:41.952 16:11:01 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.952 16:11:01 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:41.952 16:11:01 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.952 16:11:01 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:41.953 16:11:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.211 [2024-07-26 16:11:01.795404] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
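alias_rpc.sh relies on the ERR-trap idiom traced above so a failing RPC never leaves a stray target behind. The pattern, reduced to its core (binary path assumed relative to the spdk tree):

trap 'killprocess $spdk_tgt_pid; exit 1' ERR
./build/bin/spdk_tgt &
spdk_tgt_pid=$!
waitforlisten $spdk_tgt_pid
# ... exercise the aliased RPCs, e.g. via scripts/rpc.py load_config -i ...
killprocess $spdk_tgt_pid
trap - ERR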
00:05:42.211 [2024-07-26 16:11:01.795552] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid520671 ] 00:05:42.211 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.211 [2024-07-26 16:11:01.914945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.469 [2024-07-26 16:11:02.169461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.404 16:11:03 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:43.404 16:11:03 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:43.404 16:11:03 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:43.662 16:11:03 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 520671 00:05:43.662 16:11:03 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 520671 ']' 00:05:43.662 16:11:03 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 520671 00:05:43.662 16:11:03 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:43.662 16:11:03 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:43.662 16:11:03 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 520671 00:05:43.662 16:11:03 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:43.662 16:11:03 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:43.662 16:11:03 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 520671' 00:05:43.662 killing process with pid 520671 00:05:43.662 16:11:03 alias_rpc -- common/autotest_common.sh@969 -- # kill 520671 00:05:43.662 16:11:03 alias_rpc -- common/autotest_common.sh@974 -- # wait 520671 00:05:46.198 00:05:46.198 real 0m4.232s 00:05:46.198 user 0m4.327s 00:05:46.198 sys 0m0.612s 00:05:46.198 16:11:05 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.198 16:11:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.198 ************************************ 00:05:46.198 END TEST alias_rpc 00:05:46.198 ************************************ 00:05:46.198 16:11:05 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:46.198 16:11:05 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:46.198 16:11:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.198 16:11:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.198 16:11:05 -- common/autotest_common.sh@10 -- # set +x 00:05:46.198 ************************************ 00:05:46.198 START TEST spdkcli_tcp 00:05:46.198 ************************************ 00:05:46.198 16:11:05 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:46.457 * Looking for test storage... 
00:05:46.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:46.457 16:11:05 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:46.457 16:11:05 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:46.457 16:11:05 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:46.457 16:11:05 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:46.457 16:11:05 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:46.457 16:11:05 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:46.457 16:11:05 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:46.457 16:11:05 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:46.457 16:11:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:46.457 16:11:05 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=521250 00:05:46.457 16:11:05 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:46.457 16:11:05 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 521250 00:05:46.457 16:11:05 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 521250 ']' 00:05:46.457 16:11:05 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.457 16:11:05 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:46.457 16:11:05 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.457 16:11:05 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:46.457 16:11:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:46.457 [2024-07-26 16:11:06.083318] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:46.457 [2024-07-26 16:11:06.083498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid521250 ] 00:05:46.457 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.457 [2024-07-26 16:11:06.214821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.715 [2024-07-26 16:11:06.477270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.715 [2024-07-26 16:11:06.477276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.650 16:11:07 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:47.650 16:11:07 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:47.650 16:11:07 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=521398 00:05:47.650 16:11:07 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:47.650 16:11:07 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:47.908 [ 00:05:47.908 "bdev_malloc_delete", 00:05:47.908 "bdev_malloc_create", 00:05:47.908 "bdev_null_resize", 00:05:47.908 "bdev_null_delete", 00:05:47.908 "bdev_null_create", 00:05:47.908 "bdev_nvme_cuse_unregister", 00:05:47.908 "bdev_nvme_cuse_register", 00:05:47.908 "bdev_opal_new_user", 00:05:47.908 "bdev_opal_set_lock_state", 00:05:47.908 "bdev_opal_delete", 00:05:47.908 "bdev_opal_get_info", 00:05:47.908 "bdev_opal_create", 00:05:47.908 "bdev_nvme_opal_revert", 00:05:47.908 "bdev_nvme_opal_init", 00:05:47.908 "bdev_nvme_send_cmd", 00:05:47.908 "bdev_nvme_get_path_iostat", 00:05:47.908 "bdev_nvme_get_mdns_discovery_info", 00:05:47.908 "bdev_nvme_stop_mdns_discovery", 00:05:47.908 "bdev_nvme_start_mdns_discovery", 00:05:47.908 "bdev_nvme_set_multipath_policy", 00:05:47.908 "bdev_nvme_set_preferred_path", 00:05:47.908 "bdev_nvme_get_io_paths", 00:05:47.908 "bdev_nvme_remove_error_injection", 00:05:47.908 "bdev_nvme_add_error_injection", 00:05:47.908 "bdev_nvme_get_discovery_info", 00:05:47.908 "bdev_nvme_stop_discovery", 00:05:47.908 "bdev_nvme_start_discovery", 00:05:47.908 "bdev_nvme_get_controller_health_info", 00:05:47.908 "bdev_nvme_disable_controller", 00:05:47.908 "bdev_nvme_enable_controller", 00:05:47.908 "bdev_nvme_reset_controller", 00:05:47.908 "bdev_nvme_get_transport_statistics", 00:05:47.908 "bdev_nvme_apply_firmware", 00:05:47.908 "bdev_nvme_detach_controller", 00:05:47.908 "bdev_nvme_get_controllers", 00:05:47.908 "bdev_nvme_attach_controller", 00:05:47.908 "bdev_nvme_set_hotplug", 00:05:47.908 "bdev_nvme_set_options", 00:05:47.908 "bdev_passthru_delete", 00:05:47.908 "bdev_passthru_create", 00:05:47.908 "bdev_lvol_set_parent_bdev", 00:05:47.908 "bdev_lvol_set_parent", 00:05:47.908 "bdev_lvol_check_shallow_copy", 00:05:47.908 "bdev_lvol_start_shallow_copy", 00:05:47.908 "bdev_lvol_grow_lvstore", 00:05:47.908 "bdev_lvol_get_lvols", 00:05:47.908 "bdev_lvol_get_lvstores", 00:05:47.908 "bdev_lvol_delete", 00:05:47.908 "bdev_lvol_set_read_only", 00:05:47.908 "bdev_lvol_resize", 00:05:47.908 "bdev_lvol_decouple_parent", 00:05:47.908 "bdev_lvol_inflate", 00:05:47.908 "bdev_lvol_rename", 00:05:47.908 "bdev_lvol_clone_bdev", 00:05:47.908 "bdev_lvol_clone", 00:05:47.908 "bdev_lvol_snapshot", 00:05:47.908 "bdev_lvol_create", 00:05:47.908 "bdev_lvol_delete_lvstore", 00:05:47.908 
"bdev_lvol_rename_lvstore", 00:05:47.908 "bdev_lvol_create_lvstore", 00:05:47.908 "bdev_raid_set_options", 00:05:47.908 "bdev_raid_remove_base_bdev", 00:05:47.908 "bdev_raid_add_base_bdev", 00:05:47.908 "bdev_raid_delete", 00:05:47.908 "bdev_raid_create", 00:05:47.908 "bdev_raid_get_bdevs", 00:05:47.908 "bdev_error_inject_error", 00:05:47.908 "bdev_error_delete", 00:05:47.908 "bdev_error_create", 00:05:47.908 "bdev_split_delete", 00:05:47.908 "bdev_split_create", 00:05:47.908 "bdev_delay_delete", 00:05:47.908 "bdev_delay_create", 00:05:47.908 "bdev_delay_update_latency", 00:05:47.908 "bdev_zone_block_delete", 00:05:47.908 "bdev_zone_block_create", 00:05:47.908 "blobfs_create", 00:05:47.908 "blobfs_detect", 00:05:47.908 "blobfs_set_cache_size", 00:05:47.908 "bdev_aio_delete", 00:05:47.908 "bdev_aio_rescan", 00:05:47.908 "bdev_aio_create", 00:05:47.908 "bdev_ftl_set_property", 00:05:47.908 "bdev_ftl_get_properties", 00:05:47.908 "bdev_ftl_get_stats", 00:05:47.908 "bdev_ftl_unmap", 00:05:47.908 "bdev_ftl_unload", 00:05:47.908 "bdev_ftl_delete", 00:05:47.908 "bdev_ftl_load", 00:05:47.908 "bdev_ftl_create", 00:05:47.908 "bdev_virtio_attach_controller", 00:05:47.908 "bdev_virtio_scsi_get_devices", 00:05:47.908 "bdev_virtio_detach_controller", 00:05:47.908 "bdev_virtio_blk_set_hotplug", 00:05:47.908 "bdev_iscsi_delete", 00:05:47.908 "bdev_iscsi_create", 00:05:47.908 "bdev_iscsi_set_options", 00:05:47.908 "accel_error_inject_error", 00:05:47.908 "ioat_scan_accel_module", 00:05:47.908 "dsa_scan_accel_module", 00:05:47.908 "iaa_scan_accel_module", 00:05:47.908 "keyring_file_remove_key", 00:05:47.908 "keyring_file_add_key", 00:05:47.908 "keyring_linux_set_options", 00:05:47.908 "iscsi_get_histogram", 00:05:47.908 "iscsi_enable_histogram", 00:05:47.908 "iscsi_set_options", 00:05:47.908 "iscsi_get_auth_groups", 00:05:47.908 "iscsi_auth_group_remove_secret", 00:05:47.908 "iscsi_auth_group_add_secret", 00:05:47.908 "iscsi_delete_auth_group", 00:05:47.908 "iscsi_create_auth_group", 00:05:47.908 "iscsi_set_discovery_auth", 00:05:47.908 "iscsi_get_options", 00:05:47.908 "iscsi_target_node_request_logout", 00:05:47.908 "iscsi_target_node_set_redirect", 00:05:47.908 "iscsi_target_node_set_auth", 00:05:47.908 "iscsi_target_node_add_lun", 00:05:47.908 "iscsi_get_stats", 00:05:47.908 "iscsi_get_connections", 00:05:47.908 "iscsi_portal_group_set_auth", 00:05:47.908 "iscsi_start_portal_group", 00:05:47.908 "iscsi_delete_portal_group", 00:05:47.908 "iscsi_create_portal_group", 00:05:47.908 "iscsi_get_portal_groups", 00:05:47.908 "iscsi_delete_target_node", 00:05:47.908 "iscsi_target_node_remove_pg_ig_maps", 00:05:47.908 "iscsi_target_node_add_pg_ig_maps", 00:05:47.908 "iscsi_create_target_node", 00:05:47.908 "iscsi_get_target_nodes", 00:05:47.908 "iscsi_delete_initiator_group", 00:05:47.908 "iscsi_initiator_group_remove_initiators", 00:05:47.908 "iscsi_initiator_group_add_initiators", 00:05:47.908 "iscsi_create_initiator_group", 00:05:47.908 "iscsi_get_initiator_groups", 00:05:47.908 "nvmf_set_crdt", 00:05:47.908 "nvmf_set_config", 00:05:47.908 "nvmf_set_max_subsystems", 00:05:47.908 "nvmf_stop_mdns_prr", 00:05:47.908 "nvmf_publish_mdns_prr", 00:05:47.908 "nvmf_subsystem_get_listeners", 00:05:47.908 "nvmf_subsystem_get_qpairs", 00:05:47.908 "nvmf_subsystem_get_controllers", 00:05:47.908 "nvmf_get_stats", 00:05:47.908 "nvmf_get_transports", 00:05:47.908 "nvmf_create_transport", 00:05:47.908 "nvmf_get_targets", 00:05:47.908 "nvmf_delete_target", 00:05:47.908 "nvmf_create_target", 00:05:47.908 
"nvmf_subsystem_allow_any_host", 00:05:47.908 "nvmf_subsystem_remove_host", 00:05:47.908 "nvmf_subsystem_add_host", 00:05:47.908 "nvmf_ns_remove_host", 00:05:47.908 "nvmf_ns_add_host", 00:05:47.908 "nvmf_subsystem_remove_ns", 00:05:47.908 "nvmf_subsystem_add_ns", 00:05:47.908 "nvmf_subsystem_listener_set_ana_state", 00:05:47.908 "nvmf_discovery_get_referrals", 00:05:47.908 "nvmf_discovery_remove_referral", 00:05:47.908 "nvmf_discovery_add_referral", 00:05:47.908 "nvmf_subsystem_remove_listener", 00:05:47.908 "nvmf_subsystem_add_listener", 00:05:47.908 "nvmf_delete_subsystem", 00:05:47.908 "nvmf_create_subsystem", 00:05:47.908 "nvmf_get_subsystems", 00:05:47.908 "env_dpdk_get_mem_stats", 00:05:47.908 "nbd_get_disks", 00:05:47.908 "nbd_stop_disk", 00:05:47.908 "nbd_start_disk", 00:05:47.909 "ublk_recover_disk", 00:05:47.909 "ublk_get_disks", 00:05:47.909 "ublk_stop_disk", 00:05:47.909 "ublk_start_disk", 00:05:47.909 "ublk_destroy_target", 00:05:47.909 "ublk_create_target", 00:05:47.909 "virtio_blk_create_transport", 00:05:47.909 "virtio_blk_get_transports", 00:05:47.909 "vhost_controller_set_coalescing", 00:05:47.909 "vhost_get_controllers", 00:05:47.909 "vhost_delete_controller", 00:05:47.909 "vhost_create_blk_controller", 00:05:47.909 "vhost_scsi_controller_remove_target", 00:05:47.909 "vhost_scsi_controller_add_target", 00:05:47.909 "vhost_start_scsi_controller", 00:05:47.909 "vhost_create_scsi_controller", 00:05:47.909 "thread_set_cpumask", 00:05:47.909 "framework_get_governor", 00:05:47.909 "framework_get_scheduler", 00:05:47.909 "framework_set_scheduler", 00:05:47.909 "framework_get_reactors", 00:05:47.909 "thread_get_io_channels", 00:05:47.909 "thread_get_pollers", 00:05:47.909 "thread_get_stats", 00:05:47.909 "framework_monitor_context_switch", 00:05:47.909 "spdk_kill_instance", 00:05:47.909 "log_enable_timestamps", 00:05:47.909 "log_get_flags", 00:05:47.909 "log_clear_flag", 00:05:47.909 "log_set_flag", 00:05:47.909 "log_get_level", 00:05:47.909 "log_set_level", 00:05:47.909 "log_get_print_level", 00:05:47.909 "log_set_print_level", 00:05:47.909 "framework_enable_cpumask_locks", 00:05:47.909 "framework_disable_cpumask_locks", 00:05:47.909 "framework_wait_init", 00:05:47.909 "framework_start_init", 00:05:47.909 "scsi_get_devices", 00:05:47.909 "bdev_get_histogram", 00:05:47.909 "bdev_enable_histogram", 00:05:47.909 "bdev_set_qos_limit", 00:05:47.909 "bdev_set_qd_sampling_period", 00:05:47.909 "bdev_get_bdevs", 00:05:47.909 "bdev_reset_iostat", 00:05:47.909 "bdev_get_iostat", 00:05:47.909 "bdev_examine", 00:05:47.909 "bdev_wait_for_examine", 00:05:47.909 "bdev_set_options", 00:05:47.909 "notify_get_notifications", 00:05:47.909 "notify_get_types", 00:05:47.909 "accel_get_stats", 00:05:47.909 "accel_set_options", 00:05:47.909 "accel_set_driver", 00:05:47.909 "accel_crypto_key_destroy", 00:05:47.909 "accel_crypto_keys_get", 00:05:47.909 "accel_crypto_key_create", 00:05:47.909 "accel_assign_opc", 00:05:47.909 "accel_get_module_info", 00:05:47.909 "accel_get_opc_assignments", 00:05:47.909 "vmd_rescan", 00:05:47.909 "vmd_remove_device", 00:05:47.909 "vmd_enable", 00:05:47.909 "sock_get_default_impl", 00:05:47.909 "sock_set_default_impl", 00:05:47.909 "sock_impl_set_options", 00:05:47.909 "sock_impl_get_options", 00:05:47.909 "iobuf_get_stats", 00:05:47.909 "iobuf_set_options", 00:05:47.909 "framework_get_pci_devices", 00:05:47.909 "framework_get_config", 00:05:47.909 "framework_get_subsystems", 00:05:47.909 "trace_get_info", 00:05:47.909 "trace_get_tpoint_group_mask", 00:05:47.909 
"trace_disable_tpoint_group", 00:05:47.909 "trace_enable_tpoint_group", 00:05:47.909 "trace_clear_tpoint_mask", 00:05:47.909 "trace_set_tpoint_mask", 00:05:47.909 "keyring_get_keys", 00:05:47.909 "spdk_get_version", 00:05:47.909 "rpc_get_methods" 00:05:47.909 ] 00:05:47.909 16:11:07 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:47.909 16:11:07 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:47.909 16:11:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:47.909 16:11:07 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:47.909 16:11:07 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 521250 00:05:47.909 16:11:07 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 521250 ']' 00:05:47.909 16:11:07 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 521250 00:05:47.909 16:11:07 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:47.909 16:11:07 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:47.909 16:11:07 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 521250 00:05:47.909 16:11:07 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:47.909 16:11:07 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:47.909 16:11:07 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 521250' 00:05:47.909 killing process with pid 521250 00:05:47.909 16:11:07 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 521250 00:05:48.167 16:11:07 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 521250 00:05:50.697 00:05:50.697 real 0m4.252s 00:05:50.697 user 0m7.544s 00:05:50.697 sys 0m0.660s 00:05:50.697 16:11:10 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.697 16:11:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:50.697 ************************************ 00:05:50.697 END TEST spdkcli_tcp 00:05:50.697 ************************************ 00:05:50.697 16:11:10 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:50.697 16:11:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.697 16:11:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.697 16:11:10 -- common/autotest_common.sh@10 -- # set +x 00:05:50.697 ************************************ 00:05:50.697 START TEST dpdk_mem_utility 00:05:50.697 ************************************ 00:05:50.697 16:11:10 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:50.697 * Looking for test storage... 
00:05:50.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:50.697 16:11:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:50.697 16:11:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=521847 00:05:50.697 16:11:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:50.697 16:11:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 521847 00:05:50.697 16:11:10 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 521847 ']' 00:05:50.697 16:11:10 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.697 16:11:10 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.697 16:11:10 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.697 16:11:10 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.697 16:11:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:50.697 [2024-07-26 16:11:10.365794] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:50.697 [2024-07-26 16:11:10.365956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid521847 ] 00:05:50.697 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.955 [2024-07-26 16:11:10.494012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.213 [2024-07-26 16:11:10.752746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.149 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.149 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:52.149 16:11:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:52.149 16:11:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:52.149 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.149 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:52.149 { 00:05:52.149 "filename": "/tmp/spdk_mem_dump.txt" 00:05:52.149 } 00:05:52.149 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:52.149 16:11:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:52.149 DPDK memory size 820.000000 MiB in 1 heap(s) 00:05:52.149 1 heaps totaling size 820.000000 MiB 00:05:52.149 size: 820.000000 MiB heap id: 0 00:05:52.149 end heaps---------- 00:05:52.149 8 mempools totaling size 598.116089 MiB 00:05:52.149 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:52.149 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:52.149 size: 84.521057 MiB name: bdev_io_521847 00:05:52.149 size: 51.011292 MiB name: evtpool_521847 00:05:52.149 size: 
50.003479 MiB name: msgpool_521847 00:05:52.149 size: 21.763794 MiB name: PDU_Pool 00:05:52.149 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:52.149 size: 0.026123 MiB name: Session_Pool 00:05:52.149 end mempools------- 00:05:52.149 6 memzones totaling size 4.142822 MiB 00:05:52.149 size: 1.000366 MiB name: RG_ring_0_521847 00:05:52.149 size: 1.000366 MiB name: RG_ring_1_521847 00:05:52.149 size: 1.000366 MiB name: RG_ring_4_521847 00:05:52.149 size: 1.000366 MiB name: RG_ring_5_521847 00:05:52.149 size: 0.125366 MiB name: RG_ring_2_521847 00:05:52.149 size: 0.015991 MiB name: RG_ring_3_521847 00:05:52.149 end memzones------- 00:05:52.149 16:11:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:52.149 heap id: 0 total size: 820.000000 MiB number of busy elements: 41 number of free elements: 19 00:05:52.149 list of free elements. size: 18.514832 MiB 00:05:52.149 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:52.149 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:52.149 element at address: 0x200007000000 with size: 1.995972 MiB 00:05:52.149 element at address: 0x20000b200000 with size: 1.995972 MiB 00:05:52.149 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:52.149 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:52.149 element at address: 0x200019600000 with size: 0.999329 MiB 00:05:52.149 element at address: 0x200003e00000 with size: 0.996094 MiB 00:05:52.149 element at address: 0x200032200000 with size: 0.994324 MiB 00:05:52.149 element at address: 0x200018e00000 with size: 0.959900 MiB 00:05:52.149 element at address: 0x200019900040 with size: 0.937256 MiB 00:05:52.149 element at address: 0x200000200000 with size: 0.840942 MiB 00:05:52.149 element at address: 0x20001b000000 with size: 0.583191 MiB 00:05:52.149 element at address: 0x200019200000 with size: 0.491150 MiB 00:05:52.149 element at address: 0x200019a00000 with size: 0.485657 MiB 00:05:52.149 element at address: 0x200013800000 with size: 0.470581 MiB 00:05:52.149 element at address: 0x200028400000 with size: 0.411072 MiB 00:05:52.149 element at address: 0x200003a00000 with size: 0.356140 MiB 00:05:52.149 element at address: 0x20000b1ff040 with size: 0.001038 MiB 00:05:52.149 list of standard malloc elements. 
size: 199.220764 MiB 00:05:52.149 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:05:52.149 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:05:52.149 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:52.149 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:52.149 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:52.149 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:52.149 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:05:52.149 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:52.149 element at address: 0x2000137ff040 with size: 0.000427 MiB 00:05:52.149 element at address: 0x2000137ffa00 with size: 0.000366 MiB 00:05:52.149 element at address: 0x2000002d7480 with size: 0.000244 MiB 00:05:52.149 element at address: 0x2000002d7580 with size: 0.000244 MiB 00:05:52.149 element at address: 0x2000002d7680 with size: 0.000244 MiB 00:05:52.149 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:05:52.149 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:05:52.149 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:52.149 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:52.149 element at address: 0x200003aff980 with size: 0.000244 MiB 00:05:52.149 element at address: 0x200003affa80 with size: 0.000244 MiB 00:05:52.149 element at address: 0x200003eff000 with size: 0.000244 MiB 00:05:52.149 element at address: 0x20000b1ff480 with size: 0.000244 MiB 00:05:52.149 element at address: 0x20000b1ff580 with size: 0.000244 MiB 00:05:52.149 element at address: 0x20000b1ff680 with size: 0.000244 MiB 00:05:52.149 element at address: 0x20000b1ff780 with size: 0.000244 MiB 00:05:52.149 element at address: 0x20000b1ff880 with size: 0.000244 MiB 00:05:52.149 element at address: 0x20000b1ff980 with size: 0.000244 MiB 00:05:52.149 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:05:52.149 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:05:52.149 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:05:52.149 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:05:52.149 element at address: 0x2000137ff200 with size: 0.000244 MiB 00:05:52.149 element at address: 0x2000137ff300 with size: 0.000244 MiB 00:05:52.149 element at address: 0x2000137ff400 with size: 0.000244 MiB 00:05:52.149 element at address: 0x2000137ff500 with size: 0.000244 MiB 00:05:52.149 element at address: 0x2000137ff600 with size: 0.000244 MiB 00:05:52.149 element at address: 0x2000137ff700 with size: 0.000244 MiB 00:05:52.149 element at address: 0x2000137ff800 with size: 0.000244 MiB 00:05:52.149 element at address: 0x2000137ff900 with size: 0.000244 MiB 00:05:52.150 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:05:52.150 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:05:52.150 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:05:52.150 list of memzone associated elements. 
size: 602.264404 MiB 00:05:52.150 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:05:52.150 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:52.150 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:05:52.150 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:52.150 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:05:52.150 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_521847_0 00:05:52.150 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:05:52.150 associated memzone info: size: 48.002930 MiB name: MP_evtpool_521847_0 00:05:52.150 element at address: 0x200003fff340 with size: 48.003113 MiB 00:05:52.150 associated memzone info: size: 48.002930 MiB name: MP_msgpool_521847_0 00:05:52.150 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:05:52.150 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:52.150 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:05:52.150 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:52.150 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:05:52.150 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_521847 00:05:52.150 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:05:52.150 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_521847 00:05:52.150 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:52.150 associated memzone info: size: 1.007996 MiB name: MP_evtpool_521847 00:05:52.150 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:52.150 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:52.150 element at address: 0x200019abc780 with size: 1.008179 MiB 00:05:52.150 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:52.150 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:52.150 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:52.150 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:05:52.150 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:52.150 element at address: 0x200003eff100 with size: 1.000549 MiB 00:05:52.150 associated memzone info: size: 1.000366 MiB name: RG_ring_0_521847 00:05:52.150 element at address: 0x200003affb80 with size: 1.000549 MiB 00:05:52.150 associated memzone info: size: 1.000366 MiB name: RG_ring_1_521847 00:05:52.150 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:05:52.150 associated memzone info: size: 1.000366 MiB name: RG_ring_4_521847 00:05:52.150 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:05:52.150 associated memzone info: size: 1.000366 MiB name: RG_ring_5_521847 00:05:52.150 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:05:52.150 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_521847 00:05:52.150 element at address: 0x20001927dbc0 with size: 0.500549 MiB 00:05:52.150 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:52.150 element at address: 0x200013878780 with size: 0.500549 MiB 00:05:52.150 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:52.150 element at address: 0x200019a7c540 with size: 0.250549 MiB 00:05:52.150 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:52.150 element at address: 0x200003adf740 with size: 0.125549 MiB 00:05:52.150 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_521847 00:05:52.150 element at address: 0x200018ef5bc0 with size: 0.031799 MiB 00:05:52.150 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:52.150 element at address: 0x2000284693c0 with size: 0.023804 MiB 00:05:52.150 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:52.150 element at address: 0x200003adb500 with size: 0.016174 MiB 00:05:52.150 associated memzone info: size: 0.015991 MiB name: RG_ring_3_521847 00:05:52.150 element at address: 0x20002846f540 with size: 0.002502 MiB 00:05:52.150 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:52.150 element at address: 0x2000002d7780 with size: 0.000366 MiB 00:05:52.150 associated memzone info: size: 0.000183 MiB name: MP_msgpool_521847 00:05:52.150 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:05:52.150 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_521847 00:05:52.150 element at address: 0x20000b1ffa80 with size: 0.000366 MiB 00:05:52.150 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:52.150 16:11:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:52.150 16:11:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 521847 00:05:52.150 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 521847 ']' 00:05:52.150 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 521847 00:05:52.150 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:52.150 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:52.150 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 521847 00:05:52.150 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:52.150 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:52.150 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 521847' 00:05:52.150 killing process with pid 521847 00:05:52.150 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 521847 00:05:52.150 16:11:11 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 521847 00:05:54.681 00:05:54.681 real 0m4.094s 00:05:54.681 user 0m4.112s 00:05:54.681 sys 0m0.622s 00:05:54.681 16:11:14 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.681 16:11:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:54.681 ************************************ 00:05:54.681 END TEST dpdk_mem_utility 00:05:54.681 ************************************ 00:05:54.681 16:11:14 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:54.681 16:11:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.681 16:11:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.681 16:11:14 -- common/autotest_common.sh@10 -- # set +x 00:05:54.681 ************************************ 00:05:54.681 START TEST event 00:05:54.681 ************************************ 00:05:54.681 16:11:14 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:54.681 * Looking for test storage... 
00:05:54.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:54.681 16:11:14 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:54.681 16:11:14 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:54.681 16:11:14 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:54.681 16:11:14 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:54.681 16:11:14 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.681 16:11:14 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.940 ************************************ 00:05:54.940 START TEST event_perf 00:05:54.940 ************************************ 00:05:54.940 16:11:14 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:54.940 Running I/O for 1 seconds...[2024-07-26 16:11:14.492331] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:54.940 [2024-07-26 16:11:14.492489] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid522322 ] 00:05:54.940 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.940 [2024-07-26 16:11:14.618749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:55.198 [2024-07-26 16:11:14.884165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.198 [2024-07-26 16:11:14.884222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.198 [2024-07-26 16:11:14.884280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.198 [2024-07-26 16:11:14.884290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:56.570 Running I/O for 1 seconds... 00:05:56.570 lcore 0: 194047 00:05:56.570 lcore 1: 194046 00:05:56.570 lcore 2: 194045 00:05:56.570 lcore 3: 194047 00:05:56.855 done. 00:05:56.856 00:05:56.856 real 0m1.898s 00:05:56.856 user 0m4.714s 00:05:56.856 sys 0m0.168s 00:05:56.856 16:11:16 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.856 16:11:16 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:56.856 ************************************ 00:05:56.856 END TEST event_perf 00:05:56.856 ************************************ 00:05:56.856 16:11:16 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:56.856 16:11:16 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:56.856 16:11:16 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.856 16:11:16 event -- common/autotest_common.sh@10 -- # set +x 00:05:56.856 ************************************ 00:05:56.856 START TEST event_reactor 00:05:56.856 ************************************ 00:05:56.856 16:11:16 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:56.856 [2024-07-26 16:11:16.433213] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:56.856 [2024-07-26 16:11:16.433346] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid522602 ] 00:05:56.856 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.856 [2024-07-26 16:11:16.577157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.132 [2024-07-26 16:11:16.837970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.033 test_start 00:05:59.033 oneshot 00:05:59.033 tick 100 00:05:59.033 tick 100 00:05:59.033 tick 250 00:05:59.033 tick 100 00:05:59.033 tick 100 00:05:59.033 tick 100 00:05:59.033 tick 250 00:05:59.033 tick 500 00:05:59.033 tick 100 00:05:59.033 tick 100 00:05:59.033 tick 250 00:05:59.033 tick 100 00:05:59.033 tick 100 00:05:59.033 test_end 00:05:59.033 00:05:59.033 real 0m1.904s 00:05:59.033 user 0m1.729s 00:05:59.033 sys 0m0.165s 00:05:59.033 16:11:18 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.033 16:11:18 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:59.033 ************************************ 00:05:59.033 END TEST event_reactor 00:05:59.033 ************************************ 00:05:59.033 16:11:18 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:59.033 16:11:18 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:59.033 16:11:18 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.033 16:11:18 event -- common/autotest_common.sh@10 -- # set +x 00:05:59.033 ************************************ 00:05:59.033 START TEST event_reactor_perf 00:05:59.033 ************************************ 00:05:59.033 16:11:18 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:59.033 [2024-07-26 16:11:18.386106] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:59.033 [2024-07-26 16:11:18.386222] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid522886 ] 00:05:59.033 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.033 [2024-07-26 16:11:18.515344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.033 [2024-07-26 16:11:18.777602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.933 test_start 00:06:00.933 test_end 00:06:00.933 Performance: 261886 events per second 00:06:00.933 00:06:00.933 real 0m1.925s 00:06:00.933 user 0m1.752s 00:06:00.933 sys 0m0.162s 00:06:00.933 16:11:20 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.933 16:11:20 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:00.933 ************************************ 00:06:00.933 END TEST event_reactor_perf 00:06:00.933 ************************************ 00:06:00.933 16:11:20 event -- event/event.sh@49 -- # uname -s 00:06:00.933 16:11:20 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:00.933 16:11:20 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:00.933 16:11:20 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.933 16:11:20 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.933 16:11:20 event -- common/autotest_common.sh@10 -- # set +x 00:06:00.933 ************************************ 00:06:00.933 START TEST event_scheduler 00:06:00.933 ************************************ 00:06:00.933 16:11:20 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:00.933 * Looking for test storage... 00:06:00.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:00.933 16:11:20 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:00.933 16:11:20 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=523136 00:06:00.933 16:11:20 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:00.933 16:11:20 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:00.933 16:11:20 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 523136 00:06:00.933 16:11:20 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 523136 ']' 00:06:00.933 16:11:20 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.933 16:11:20 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.933 16:11:20 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
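At this point the harness has launched the scheduler test app with --wait-for-rpc and is blocking in waitforlisten until the process is alive and its RPC socket at /var/tmp/spdk.sock accepts connections. A minimal bash sketch of that readiness loop, assuming the default socket path and the same 30 x 0.5 s retry budget used by the shutdown wait earlier in this log (the helper name wait_for_rpc_sock is illustrative, not part of the suite):

wait_for_rpc_sock() {
    # Poll until the target process is alive and its RPC UNIX socket exists.
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i=0
    while (( i++ < 30 )); do
        kill -0 "$pid" 2>/dev/null || return 1   # process exited before listening
        [[ -S "$sock" ]] && return 0             # socket present, RPCs can be issued
        sleep 0.5
    done
    return 1                                     # timed out waiting for the listener
}

The real waitforlisten helper may probe readiness differently (for example by issuing an RPC); this only mirrors the bounded-retry structure visible in the log.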
00:06:00.933 16:11:20 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.933 16:11:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:00.933 [2024-07-26 16:11:20.457444] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:00.933 [2024-07-26 16:11:20.457580] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid523136 ] 00:06:00.933 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.933 [2024-07-26 16:11:20.593578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:01.192 [2024-07-26 16:11:20.862420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.192 [2024-07-26 16:11:20.862478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.192 [2024-07-26 16:11:20.862530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.192 [2024-07-26 16:11:20.862537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:01.758 16:11:21 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:01.758 16:11:21 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:01.758 16:11:21 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:01.758 16:11:21 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.758 16:11:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:01.758 [2024-07-26 16:11:21.413297] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:01.758 [2024-07-26 16:11:21.413369] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:01.758 [2024-07-26 16:11:21.413426] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:01.758 [2024-07-26 16:11:21.413450] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:01.758 [2024-07-26 16:11:21.413468] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:01.758 16:11:21 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.758 16:11:21 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:01.758 16:11:21 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.758 16:11:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:02.016 [2024-07-26 16:11:21.735025] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
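The notices above show the test selecting the dynamic scheduler (falling back when the DPDK governor cannot initialize) and then completing framework init, since the app was started with --wait-for-rpc. The same sequence can be driven against any SPDK target in that state; a hedged sketch using only RPCs that appear in the rpc_get_methods listing earlier in this log, with rpc.py invoked from an SPDK checkout:

./scripts/rpc.py framework_set_scheduler dynamic   # switch from the default static scheduler
./scripts/rpc.py framework_get_scheduler           # confirm the active scheduler and period
./scripts/rpc.py framework_start_init              # finish subsystem initialization

The load-limit/core-limit/core-busy values printed above (20/80/95) come from the scheduler's set_opts notices; whether they are defaults or set explicitly by the test app is not visible here, so the sketch leaves them untouched.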
00:06:02.016 16:11:21 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.016 16:11:21 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:02.016 16:11:21 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.016 16:11:21 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.016 16:11:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:02.016 ************************************ 00:06:02.016 START TEST scheduler_create_thread 00:06:02.016 ************************************ 00:06:02.016 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:02.016 16:11:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:02.016 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.016 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.016 2 00:06:02.016 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.016 16:11:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:02.016 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.016 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.275 3 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.275 4 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.275 5 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.275 6 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.275 7 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.275 8 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.275 9 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.275 10 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.275 00:06:02.275 real 0m0.110s 00:06:02.275 user 0m0.014s 00:06:02.275 sys 0m0.003s 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.275 16:11:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.275 ************************************ 00:06:02.275 END TEST scheduler_create_thread 00:06:02.276 ************************************ 00:06:02.276 16:11:21 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:02.276 16:11:21 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 523136 00:06:02.276 16:11:21 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 523136 ']' 00:06:02.276 16:11:21 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 523136 00:06:02.276 16:11:21 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:02.276 16:11:21 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:02.276 16:11:21 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 523136 00:06:02.276 16:11:21 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:02.276 16:11:21 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:02.276 16:11:21 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 523136' 00:06:02.276 killing process with pid 523136 00:06:02.276 16:11:21 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 523136 00:06:02.276 16:11:21 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 523136 00:06:02.842 [2024-07-26 16:11:22.357956] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
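The scheduler_create_thread subtest that just finished drives the scheduler app through its RPC plugin: an active and an idle thread pinned to each of the four cores, plus unpinned threads whose activity is changed or that are deleted by id. A compressed sketch of that pattern, reusing only the plugin calls visible in the log (rpc_cmd stands for the suite's rpc.py wrapper, and capturing the created thread id from its output is an assumption):

# One 100%-active and one idle thread pinned to each of cores 0-3.
for mask in 0x1 0x2 0x4 0x8; do
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned   -m "$mask" -a 0
done
# Unpinned threads: bump one to 50% activity, create and delete another by id.
tid=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
tid=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$tid"

The actual scheduler.sh also creates the one_third_active thread (-a 30) seen above; the sketch omits it for brevity.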
00:06:03.778 00:06:03.778 real 0m3.132s 00:06:03.778 user 0m4.952s 00:06:03.778 sys 0m0.494s 00:06:03.778 16:11:23 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.778 16:11:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:03.778 ************************************ 00:06:03.778 END TEST event_scheduler 00:06:03.778 ************************************ 00:06:03.778 16:11:23 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:03.778 16:11:23 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:03.778 16:11:23 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.778 16:11:23 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.778 16:11:23 event -- common/autotest_common.sh@10 -- # set +x 00:06:03.778 ************************************ 00:06:03.778 START TEST app_repeat 00:06:03.778 ************************************ 00:06:03.778 16:11:23 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:03.778 16:11:23 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.778 16:11:23 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.778 16:11:23 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:03.778 16:11:23 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.778 16:11:23 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:03.778 16:11:23 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:03.778 16:11:23 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:03.778 16:11:23 event.app_repeat -- event/event.sh@19 -- # repeat_pid=523528 00:06:03.778 16:11:23 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:03.778 16:11:23 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:03.778 16:11:23 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 523528' 00:06:03.778 Process app_repeat pid: 523528 00:06:03.778 16:11:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:03.778 16:11:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:03.778 spdk_app_start Round 0 00:06:03.778 16:11:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 523528 /var/tmp/spdk-nbd.sock 00:06:03.778 16:11:23 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 523528 ']' 00:06:03.778 16:11:23 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:03.778 16:11:23 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.778 16:11:23 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:03.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:03.778 16:11:23 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.778 16:11:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.036 [2024-07-26 16:11:23.567028] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:04.036 [2024-07-26 16:11:23.567185] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid523528 ] 00:06:04.036 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.036 [2024-07-26 16:11:23.699380] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.295 [2024-07-26 16:11:23.960776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.295 [2024-07-26 16:11:23.960782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.860 16:11:24 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.860 16:11:24 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:04.860 16:11:24 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.118 Malloc0 00:06:05.118 16:11:24 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.684 Malloc1 00:06:05.684 16:11:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.684 16:11:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.684 16:11:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.684 16:11:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:05.684 16:11:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.684 16:11:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:05.684 16:11:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.684 16:11:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.684 16:11:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.684 16:11:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:05.684 16:11:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.684 16:11:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:05.684 16:11:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:05.684 16:11:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:05.684 16:11:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.684 16:11:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:05.684 /dev/nbd0 00:06:05.684 16:11:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:05.685 16:11:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:05.685 16:11:25 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:05.685 16:11:25 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:05.685 16:11:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:05.685 16:11:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:05.685 16:11:25 event.app_repeat 
-- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:05.685 16:11:25 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:05.685 16:11:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:05.685 16:11:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:05.685 16:11:25 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.685 1+0 records in 00:06:05.685 1+0 records out 00:06:05.685 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000179474 s, 22.8 MB/s 00:06:05.685 16:11:25 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.943 16:11:25 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:05.943 16:11:25 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.943 16:11:25 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:05.943 16:11:25 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:05.943 16:11:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.943 16:11:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.943 16:11:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:05.943 /dev/nbd1 00:06:06.201 16:11:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:06.201 16:11:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:06.201 16:11:25 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:06.201 16:11:25 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:06.201 16:11:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:06.201 16:11:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:06.201 16:11:25 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:06.201 16:11:25 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:06.201 16:11:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:06.201 16:11:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:06.201 16:11:25 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.201 1+0 records in 00:06:06.201 1+0 records out 00:06:06.201 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025358 s, 16.2 MB/s 00:06:06.201 16:11:25 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:06.201 16:11:25 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:06.201 16:11:25 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:06.201 16:11:25 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:06.201 16:11:25 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:06.201 16:11:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.201 16:11:25 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.201 16:11:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.201 16:11:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.201 16:11:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.460 16:11:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:06.460 { 00:06:06.460 "nbd_device": "/dev/nbd0", 00:06:06.460 "bdev_name": "Malloc0" 00:06:06.460 }, 00:06:06.460 { 00:06:06.460 "nbd_device": "/dev/nbd1", 00:06:06.460 "bdev_name": "Malloc1" 00:06:06.460 } 00:06:06.460 ]' 00:06:06.460 16:11:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:06.460 { 00:06:06.460 "nbd_device": "/dev/nbd0", 00:06:06.460 "bdev_name": "Malloc0" 00:06:06.460 }, 00:06:06.460 { 00:06:06.460 "nbd_device": "/dev/nbd1", 00:06:06.460 "bdev_name": "Malloc1" 00:06:06.460 } 00:06:06.460 ]' 00:06:06.460 16:11:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.460 16:11:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:06.460 /dev/nbd1' 00:06:06.460 16:11:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:06.460 /dev/nbd1' 00:06:06.460 16:11:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.460 16:11:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:06.460 16:11:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:06.460 16:11:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:06.460 16:11:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:06.460 16:11:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:06.460 16:11:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.460 16:11:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.460 16:11:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:06.460 16:11:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.460 16:11:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:06.460 16:11:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:06.460 256+0 records in 00:06:06.460 256+0 records out 00:06:06.460 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00477511 s, 220 MB/s 00:06:06.460 16:11:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.460 16:11:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:06.460 256+0 records in 00:06:06.460 256+0 records out 00:06:06.460 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252088 s, 41.6 MB/s 00:06:06.460 16:11:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.460 16:11:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:06.460 256+0 records in 00:06:06.460 256+0 records out 00:06:06.460 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0288396 s, 36.4 MB/s 00:06:06.460 16:11:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:06.460 16:11:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.460 16:11:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.460 16:11:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:06.460 16:11:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.460 16:11:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:06.460 16:11:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:06.460 16:11:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.460 16:11:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:06.460 16:11:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.460 16:11:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:06.460 16:11:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.460 16:11:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:06.460 16:11:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.460 16:11:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.460 16:11:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:06.460 16:11:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:06.460 16:11:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.460 16:11:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:06.718 16:11:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:06.718 16:11:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:06.718 16:11:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:06.718 16:11:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.718 16:11:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.718 16:11:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:06.718 16:11:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.718 16:11:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.718 16:11:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.718 16:11:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:06.976 16:11:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:06.976 16:11:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:06.976 16:11:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:06.976 16:11:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.976 16:11:26 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.976 16:11:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:06.976 16:11:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.976 16:11:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.976 16:11:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.976 16:11:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.976 16:11:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:07.234 16:11:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:07.234 16:11:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:07.234 16:11:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:07.234 16:11:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:07.234 16:11:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:07.234 16:11:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:07.234 16:11:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:07.234 16:11:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:07.234 16:11:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:07.234 16:11:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:07.234 16:11:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:07.234 16:11:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:07.234 16:11:26 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:07.801 16:11:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:09.177 [2024-07-26 16:11:28.772804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.435 [2024-07-26 16:11:29.026640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.435 [2024-07-26 16:11:29.026643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.693 [2024-07-26 16:11:29.246983] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:09.693 [2024-07-26 16:11:29.247075] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:10.626 16:11:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:10.626 16:11:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:10.626 spdk_app_start Round 1 00:06:10.627 16:11:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 523528 /var/tmp/spdk-nbd.sock 00:06:10.627 16:11:30 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 523528 ']' 00:06:10.627 16:11:30 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:10.627 16:11:30 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.627 16:11:30 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:10.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
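Every round in the trace (Round 0 above, Rounds 1 and 2 below) runs the same write-and-verify cycle against two 64 MB malloc bdevs (4 KiB blocks) exported over NBD. Stripped of the xtrace noise, the sequence of RPCs and I/O is roughly the sketch below; the socket, block size, and counts are the ones used by this run, while the temp-file location is illustrative.

  RPC="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  TMP=/tmp/nbdrandtest                       # this run keeps the file under spdk/test/event/

  $RPC bdev_malloc_create 64 4096            # -> Malloc0 (64 MB, 4 KiB blocks)
  $RPC bdev_malloc_create 64 4096            # -> Malloc1
  $RPC nbd_start_disk Malloc0 /dev/nbd0
  $RPC nbd_start_disk Malloc1 /dev/nbd1

  dd if=/dev/urandom of="$TMP" bs=4096 count=256             # 1 MiB of random data
  for dev in /dev/nbd0 /dev/nbd1; do
      dd if="$TMP" of="$dev" bs=4096 count=256 oflag=direct   # push it through the NBD device
      cmp -b -n 1M "$TMP" "$dev"                              # read back and compare byte-for-byte
  done
  rm "$TMP"

  $RPC nbd_stop_disk /dev/nbd0
  $RPC nbd_stop_disk /dev/nbd1
  $RPC spdk_kill_instance SIGTERM            # end the round; app_repeat comes back up for the next one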
00:06:10.627 16:11:30 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.627 16:11:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:10.883 16:11:30 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:10.883 16:11:30 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:10.883 16:11:30 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.450 Malloc0 00:06:11.450 16:11:30 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.708 Malloc1 00:06:11.708 16:11:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:11.708 16:11:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.708 16:11:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.708 16:11:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:11.708 16:11:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.708 16:11:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:11.708 16:11:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:11.708 16:11:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.708 16:11:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.708 16:11:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:11.708 16:11:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.708 16:11:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:11.708 16:11:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:11.708 16:11:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:11.708 16:11:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.708 16:11:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:11.966 /dev/nbd0 00:06:11.966 16:11:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:11.966 16:11:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:11.966 16:11:31 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:11.966 16:11:31 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:11.966 16:11:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:11.966 16:11:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:11.966 16:11:31 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:11.966 16:11:31 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:11.966 16:11:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:11.966 16:11:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:11.966 16:11:31 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:11.966 1+0 records in 00:06:11.966 1+0 records out 00:06:11.966 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188178 s, 21.8 MB/s 00:06:11.966 16:11:31 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:11.966 16:11:31 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:11.966 16:11:31 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:11.966 16:11:31 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:11.966 16:11:31 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:11.966 16:11:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.966 16:11:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.966 16:11:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:12.224 /dev/nbd1 00:06:12.224 16:11:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:12.224 16:11:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:12.224 16:11:31 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:12.224 16:11:31 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:12.224 16:11:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:12.224 16:11:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:12.224 16:11:31 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:12.224 16:11:31 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:12.224 16:11:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:12.224 16:11:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:12.224 16:11:31 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.224 1+0 records in 00:06:12.224 1+0 records out 00:06:12.224 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268204 s, 15.3 MB/s 00:06:12.224 16:11:31 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:12.224 16:11:31 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:12.224 16:11:31 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:12.224 16:11:31 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:12.224 16:11:31 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:12.224 16:11:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.224 16:11:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.224 16:11:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.224 16:11:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.224 16:11:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.482 16:11:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:12.482 { 00:06:12.482 "nbd_device": "/dev/nbd0", 00:06:12.482 "bdev_name": "Malloc0" 00:06:12.482 }, 00:06:12.482 { 00:06:12.482 "nbd_device": "/dev/nbd1", 00:06:12.482 "bdev_name": "Malloc1" 00:06:12.482 } 00:06:12.482 ]' 00:06:12.482 16:11:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:12.482 { 00:06:12.482 "nbd_device": "/dev/nbd0", 00:06:12.482 "bdev_name": "Malloc0" 00:06:12.482 }, 00:06:12.482 { 00:06:12.482 "nbd_device": "/dev/nbd1", 00:06:12.482 "bdev_name": "Malloc1" 00:06:12.482 } 00:06:12.482 ]' 00:06:12.482 16:11:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.482 16:11:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:12.482 /dev/nbd1' 00:06:12.482 16:11:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:12.482 /dev/nbd1' 00:06:12.482 16:11:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.482 16:11:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:12.482 16:11:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:12.482 16:11:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:12.482 16:11:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:12.482 16:11:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:12.482 16:11:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.482 16:11:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.482 16:11:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:12.482 16:11:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:12.482 16:11:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:12.482 16:11:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:12.482 256+0 records in 00:06:12.482 256+0 records out 00:06:12.482 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00488138 s, 215 MB/s 00:06:12.482 16:11:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.482 16:11:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:12.482 256+0 records in 00:06:12.482 256+0 records out 00:06:12.482 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244174 s, 42.9 MB/s 00:06:12.482 16:11:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.482 16:11:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:12.482 256+0 records in 00:06:12.482 256+0 records out 00:06:12.482 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0291052 s, 36.0 MB/s 00:06:12.482 16:11:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:12.482 16:11:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.482 16:11:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.482 16:11:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:12.482 16:11:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:12.482 16:11:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:12.482 16:11:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:12.482 16:11:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.482 16:11:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:12.482 16:11:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.482 16:11:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:12.482 16:11:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:12.482 16:11:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:12.482 16:11:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.482 16:11:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.482 16:11:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:12.482 16:11:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:12.482 16:11:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.482 16:11:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:12.740 16:11:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:12.740 16:11:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:12.740 16:11:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:12.740 16:11:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.740 16:11:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.740 16:11:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:12.740 16:11:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:12.740 16:11:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.740 16:11:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.740 16:11:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:12.997 16:11:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:12.998 16:11:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:12.998 16:11:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:12.998 16:11:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.998 16:11:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.998 16:11:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:12.998 16:11:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:12.998 16:11:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.998 16:11:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.998 16:11:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.998 16:11:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.255 16:11:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:13.255 16:11:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:13.255 16:11:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.255 16:11:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:13.255 16:11:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:13.255 16:11:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.513 16:11:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:13.513 16:11:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:13.513 16:11:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:13.513 16:11:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:13.513 16:11:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:13.513 16:11:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:13.513 16:11:33 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:13.772 16:11:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:15.175 [2024-07-26 16:11:34.826045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.433 [2024-07-26 16:11:35.080153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.433 [2024-07-26 16:11:35.080154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.691 [2024-07-26 16:11:35.301024] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:15.691 [2024-07-26 16:11:35.301122] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:17.065 16:11:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:17.065 16:11:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:17.065 spdk_app_start Round 2 00:06:17.065 16:11:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 523528 /var/tmp/spdk-nbd.sock 00:06:17.065 16:11:36 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 523528 ']' 00:06:17.065 16:11:36 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:17.065 16:11:36 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.065 16:11:36 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:17.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
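The nbd_get_count checks that bracket each round just parse nbd_get_disks output and count device nodes: two while the disks are attached, zero after they are stopped (which is why the second query above returns '[]'). The same check as a standalone snippet, reusing the jq/grep pipeline from the trace:

  RPC="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

  # e.g. [{"nbd_device":"/dev/nbd0","bdev_name":"Malloc0"}, ...] or [] once the disks are stopped
  disks_json=$($RPC nbd_get_disks)
  count=$(echo "$disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)

  if [ "$count" -ne 2 ]; then
      echo "expected 2 NBD devices, found $count" >&2
      exit 1
  fi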
00:06:17.065 16:11:36 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.065 16:11:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:17.065 16:11:36 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.065 16:11:36 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:17.065 16:11:36 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:17.323 Malloc0 00:06:17.323 16:11:36 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:17.581 Malloc1 00:06:17.581 16:11:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:17.581 16:11:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.581 16:11:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.581 16:11:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:17.581 16:11:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.581 16:11:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:17.581 16:11:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:17.581 16:11:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.581 16:11:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.581 16:11:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:17.581 16:11:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.581 16:11:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:17.581 16:11:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:17.581 16:11:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:17.581 16:11:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.581 16:11:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:17.839 /dev/nbd0 00:06:17.839 16:11:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:17.839 16:11:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:17.839 16:11:37 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:17.839 16:11:37 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:17.839 16:11:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:17.839 16:11:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:17.839 16:11:37 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:17.839 16:11:37 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:17.839 16:11:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:17.839 16:11:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:17.839 16:11:37 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:17.839 1+0 records in 00:06:17.839 1+0 records out 00:06:17.839 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209742 s, 19.5 MB/s 00:06:17.839 16:11:37 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:17.839 16:11:37 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:17.839 16:11:37 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:17.839 16:11:37 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:17.839 16:11:37 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:17.839 16:11:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.839 16:11:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.839 16:11:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:18.099 /dev/nbd1 00:06:18.099 16:11:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:18.099 16:11:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:18.099 16:11:37 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:18.099 16:11:37 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:18.099 16:11:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:18.099 16:11:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:18.099 16:11:37 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:18.099 16:11:37 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:18.099 16:11:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:18.099 16:11:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:18.099 16:11:37 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:18.099 1+0 records in 00:06:18.099 1+0 records out 00:06:18.099 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272631 s, 15.0 MB/s 00:06:18.099 16:11:37 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:18.099 16:11:37 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:18.099 16:11:37 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:18.099 16:11:37 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:18.099 16:11:37 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:18.099 16:11:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.099 16:11:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.357 16:11:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:18.357 16:11:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.357 16:11:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.357 16:11:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:18.357 { 00:06:18.357 "nbd_device": "/dev/nbd0", 00:06:18.357 "bdev_name": "Malloc0" 00:06:18.357 }, 00:06:18.357 { 00:06:18.357 "nbd_device": "/dev/nbd1", 00:06:18.357 "bdev_name": "Malloc1" 00:06:18.357 } 00:06:18.357 ]' 00:06:18.357 16:11:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:18.357 { 00:06:18.357 "nbd_device": "/dev/nbd0", 00:06:18.357 "bdev_name": "Malloc0" 00:06:18.357 }, 00:06:18.357 { 00:06:18.357 "nbd_device": "/dev/nbd1", 00:06:18.357 "bdev_name": "Malloc1" 00:06:18.357 } 00:06:18.357 ]' 00:06:18.357 16:11:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.615 16:11:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:18.615 /dev/nbd1' 00:06:18.615 16:11:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:18.615 /dev/nbd1' 00:06:18.615 16:11:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.615 16:11:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:18.615 16:11:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:18.615 16:11:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:18.615 16:11:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:18.615 16:11:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:18.615 16:11:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.615 16:11:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:18.615 16:11:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:18.615 16:11:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:18.615 16:11:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:18.615 16:11:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:18.615 256+0 records in 00:06:18.615 256+0 records out 00:06:18.615 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00489796 s, 214 MB/s 00:06:18.615 16:11:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.615 16:11:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:18.615 256+0 records in 00:06:18.615 256+0 records out 00:06:18.615 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249316 s, 42.1 MB/s 00:06:18.615 16:11:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.615 16:11:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:18.615 256+0 records in 00:06:18.615 256+0 records out 00:06:18.615 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0298687 s, 35.1 MB/s 00:06:18.615 16:11:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:18.615 16:11:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.615 16:11:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:18.615 16:11:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:18.615 16:11:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:18.615 16:11:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:18.615 16:11:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:18.615 16:11:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:18.615 16:11:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:18.615 16:11:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:18.615 16:11:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:18.615 16:11:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:18.615 16:11:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:18.615 16:11:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.615 16:11:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.615 16:11:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:18.615 16:11:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:18.615 16:11:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.615 16:11:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:18.873 16:11:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:18.873 16:11:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:18.873 16:11:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:18.873 16:11:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.873 16:11:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.873 16:11:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:18.873 16:11:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:18.873 16:11:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.873 16:11:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.873 16:11:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:19.131 16:11:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:19.131 16:11:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:19.131 16:11:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:19.131 16:11:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.131 16:11:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.131 16:11:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:19.131 16:11:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:19.131 16:11:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.131 16:11:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.131 16:11:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.131 16:11:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.389 16:11:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:19.389 16:11:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:19.389 16:11:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.389 16:11:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:19.389 16:11:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:19.389 16:11:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.389 16:11:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:19.389 16:11:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:19.389 16:11:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:19.389 16:11:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:19.389 16:11:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:19.389 16:11:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:19.389 16:11:39 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:19.957 16:11:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:21.332 [2024-07-26 16:11:40.897950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.591 [2024-07-26 16:11:41.152357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.591 [2024-07-26 16:11:41.152360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.849 [2024-07-26 16:11:41.368122] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:21.849 [2024-07-26 16:11:41.368208] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:22.783 16:11:42 event.app_repeat -- event/event.sh@38 -- # waitforlisten 523528 /var/tmp/spdk-nbd.sock 00:06:22.783 16:11:42 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 523528 ']' 00:06:22.783 16:11:42 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:22.783 16:11:42 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.783 16:11:42 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:22.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
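Each dd in the trace is preceded by a waitfornbd check (and each nbd_stop_disk by the matching waitfornbd_exit): poll /proc/partitions until the kernel publishes the nbd node, then prove the device serves data with one direct-I/O read. Below is a condensed sketch of that helper, keeping the 20-try bound and the non-zero size check seen in the log; the short sleep and the /tmp probe file are illustrative.

  waitfornbd_sketch() {
      local nbd_name=$1 tmp=/tmp/nbdtest i size

      # wait (up to 20 tries) for the kernel to list the device in /proc/partitions
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done

      # sanity read: a single 4 KiB direct-I/O block must come back non-empty
      for ((i = 1; i <= 20; i++)); do
          dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct 2>/dev/null
          size=$(stat -c %s "$tmp" 2>/dev/null || echo 0)
          rm -f "$tmp"
          [ "$size" != 0 ] && return 0
          sleep 0.1
      done
      return 1
  }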
00:06:22.783 16:11:42 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.783 16:11:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:23.042 16:11:42 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:23.042 16:11:42 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:23.042 16:11:42 event.app_repeat -- event/event.sh@39 -- # killprocess 523528 00:06:23.042 16:11:42 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 523528 ']' 00:06:23.042 16:11:42 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 523528 00:06:23.042 16:11:42 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:23.042 16:11:42 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:23.042 16:11:42 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 523528 00:06:23.042 16:11:42 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:23.042 16:11:42 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:23.042 16:11:42 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 523528' 00:06:23.042 killing process with pid 523528 00:06:23.042 16:11:42 event.app_repeat -- common/autotest_common.sh@969 -- # kill 523528 00:06:23.042 16:11:42 event.app_repeat -- common/autotest_common.sh@974 -- # wait 523528 00:06:24.422 spdk_app_start is called in Round 0. 00:06:24.422 Shutdown signal received, stop current app iteration 00:06:24.422 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:06:24.422 spdk_app_start is called in Round 1. 00:06:24.422 Shutdown signal received, stop current app iteration 00:06:24.422 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:06:24.422 spdk_app_start is called in Round 2. 00:06:24.422 Shutdown signal received, stop current app iteration 00:06:24.422 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:06:24.422 spdk_app_start is called in Round 3. 
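The teardown traced above is the killprocess helper: make sure the pid is still alive, look at what it actually is (refusing to signal a bare sudo), send the default SIGTERM, and wait for it so the next test starts clean; the replayed spdk_app_start/shutdown notices above confirm the app went through all four rounds before exiting. A compressed sketch of the helper's logic:

  killprocess_sketch() {
      local pid=$1
      [ -z "$pid" ] && return 1
      kill -0 "$pid" 2>/dev/null || return 0                 # already gone, nothing to do

      if [ "$(uname)" = Linux ]; then
          local name
          name=$(ps --no-headers -o comm= "$pid")            # e.g. reactor_0 for an SPDK app
          [ "$name" = sudo ] && return 1                     # never signal a bare sudo wrapper
      fi

      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true                        # wait only succeeds for our own children
  }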
00:06:24.422 Shutdown signal received, stop current app iteration 00:06:24.422 16:11:44 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:24.422 16:11:44 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:24.422 00:06:24.422 real 0m20.516s 00:06:24.422 user 0m42.018s 00:06:24.422 sys 0m3.314s 00:06:24.422 16:11:44 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.422 16:11:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:24.422 ************************************ 00:06:24.422 END TEST app_repeat 00:06:24.422 ************************************ 00:06:24.422 16:11:44 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:24.422 16:11:44 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:24.422 16:11:44 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.422 16:11:44 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.422 16:11:44 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.422 ************************************ 00:06:24.422 START TEST cpu_locks 00:06:24.422 ************************************ 00:06:24.422 16:11:44 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:24.422 * Looking for test storage... 00:06:24.422 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:24.422 16:11:44 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:24.422 16:11:44 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:24.422 16:11:44 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:24.422 16:11:44 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:24.422 16:11:44 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.422 16:11:44 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.422 16:11:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.422 ************************************ 00:06:24.422 START TEST default_locks 00:06:24.422 ************************************ 00:06:24.422 16:11:44 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:24.422 16:11:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=526268 00:06:24.422 16:11:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:24.422 16:11:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 526268 00:06:24.422 16:11:44 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 526268 ']' 00:06:24.422 16:11:44 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.422 16:11:44 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.422 16:11:44 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:24.422 16:11:44 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.422 16:11:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.680 [2024-07-26 16:11:44.247633] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:24.680 [2024-07-26 16:11:44.247798] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid526268 ] 00:06:24.680 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.680 [2024-07-26 16:11:44.367870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.938 [2024-07-26 16:11:44.623583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.872 16:11:45 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:25.872 16:11:45 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:25.872 16:11:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 526268 00:06:25.872 16:11:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 526268 00:06:25.872 16:11:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:26.130 lslocks: write error 00:06:26.130 16:11:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 526268 00:06:26.130 16:11:45 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 526268 ']' 00:06:26.130 16:11:45 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 526268 00:06:26.130 16:11:45 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:26.130 16:11:45 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:26.130 16:11:45 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 526268 00:06:26.130 16:11:45 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:26.130 16:11:45 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:26.130 16:11:45 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 526268' 00:06:26.130 killing process with pid 526268 00:06:26.130 16:11:45 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 526268 00:06:26.130 16:11:45 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 526268 00:06:28.659 16:11:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 526268 00:06:28.659 16:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:28.659 16:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 526268 00:06:28.659 16:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:28.659 16:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.659 16:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:28.659 16:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.659 16:11:48 event.cpu_locks.default_locks -- 
common/autotest_common.sh@653 -- # waitforlisten 526268 00:06:28.659 16:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 526268 ']' 00:06:28.659 16:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.659 16:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.659 16:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.659 16:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.659 16:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (526268) - No such process 00:06:28.659 ERROR: process (pid: 526268) is no longer running 00:06:28.659 16:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:28.659 16:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:28.659 16:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:28.659 16:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:28.659 16:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:28.659 16:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:28.659 16:11:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:28.659 16:11:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:28.659 16:11:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:28.659 16:11:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:28.659 00:06:28.659 real 0m4.208s 00:06:28.659 user 0m4.213s 00:06:28.659 sys 0m0.749s 00:06:28.659 16:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.659 16:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.659 ************************************ 00:06:28.659 END TEST default_locks 00:06:28.659 ************************************ 00:06:28.659 16:11:48 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:28.659 16:11:48 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:28.659 16:11:48 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.659 16:11:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.659 ************************************ 00:06:28.659 START TEST default_locks_via_rpc 00:06:28.659 ************************************ 00:06:28.659 16:11:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:28.659 16:11:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=526708 00:06:28.659 16:11:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:28.659 16:11:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 526708 
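The locks_exist check that default_locks exercised above (cpu_locks.sh@22 in the trace) reduces to asking lslocks whether the target process still holds a file lock whose path contains spdk_cpu_lock. A stand-alone sketch of the same check, assuming util-linux lslocks is available (the function name below is illustrative):

    # Return 0 if the given PID holds an SPDK per-core lock file
    # (/var/tmp/spdk_cpu_lock_*), mirroring the lslocks | grep check in the trace.
    check_spdk_core_lock() {
        local pid=$1
        # lslocks may report "write error" when grep -q exits early and closes
        # the pipe, exactly as seen in the log; only grep's exit status matters.
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    # Usage: check_spdk_core_lock 526268 && echo "core lock held"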
00:06:28.659 16:11:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 526708 ']' 00:06:28.659 16:11:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.659 16:11:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.659 16:11:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.659 16:11:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.659 16:11:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.918 [2024-07-26 16:11:48.514111] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:28.918 [2024-07-26 16:11:48.514248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid526708 ] 00:06:28.918 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.918 [2024-07-26 16:11:48.646455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.177 [2024-07-26 16:11:48.905760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.113 16:11:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:30.113 16:11:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:30.113 16:11:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:30.113 16:11:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.113 16:11:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.113 16:11:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.113 16:11:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:30.113 16:11:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:30.113 16:11:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:30.113 16:11:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:30.113 16:11:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:30.113 16:11:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.113 16:11:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.113 16:11:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.113 16:11:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 526708 00:06:30.113 16:11:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 526708 00:06:30.113 16:11:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:30.372 16:11:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 526708 
00:06:30.372 16:11:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 526708 ']' 00:06:30.372 16:11:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 526708 00:06:30.372 16:11:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:30.372 16:11:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:30.372 16:11:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 526708 00:06:30.372 16:11:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:30.372 16:11:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:30.372 16:11:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 526708' 00:06:30.372 killing process with pid 526708 00:06:30.372 16:11:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 526708 00:06:30.372 16:11:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 526708 00:06:32.969 00:06:32.969 real 0m4.220s 00:06:32.969 user 0m4.201s 00:06:32.969 sys 0m0.754s 00:06:32.969 16:11:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.969 16:11:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.969 ************************************ 00:06:32.969 END TEST default_locks_via_rpc 00:06:32.969 ************************************ 00:06:32.969 16:11:52 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:32.969 16:11:52 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.969 16:11:52 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.969 16:11:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.969 ************************************ 00:06:32.969 START TEST non_locking_app_on_locked_coremask 00:06:32.969 ************************************ 00:06:32.969 16:11:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:32.969 16:11:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=527264 00:06:32.969 16:11:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:32.969 16:11:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 527264 /var/tmp/spdk.sock 00:06:32.969 16:11:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 527264 ']' 00:06:32.969 16:11:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.969 16:11:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.969 16:11:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:32.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.969 16:11:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:32.969 16:11:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.227 [2024-07-26 16:11:52.784461] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:33.227 [2024-07-26 16:11:52.784602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid527264 ] 00:06:33.227 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.227 [2024-07-26 16:11:52.912106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.487 [2024-07-26 16:11:53.160999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.423 16:11:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.423 16:11:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:34.423 16:11:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=527411 00:06:34.423 16:11:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:34.423 16:11:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 527411 /var/tmp/spdk2.sock 00:06:34.423 16:11:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 527411 ']' 00:06:34.423 16:11:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:34.423 16:11:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.423 16:11:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:34.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:34.423 16:11:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.423 16:11:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.423 [2024-07-26 16:11:54.150893] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:34.423 [2024-07-26 16:11:54.151098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid527411 ] 00:06:34.681 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.681 [2024-07-26 16:11:54.329338] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:34.681 [2024-07-26 16:11:54.329414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.247 [2024-07-26 16:11:54.850653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.148 16:11:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:37.148 16:11:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:37.148 16:11:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 527264 00:06:37.148 16:11:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 527264 00:06:37.148 16:11:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:38.082 lslocks: write error 00:06:38.082 16:11:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 527264 00:06:38.082 16:11:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 527264 ']' 00:06:38.082 16:11:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 527264 00:06:38.082 16:11:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:38.082 16:11:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:38.082 16:11:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 527264 00:06:38.082 16:11:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:38.082 16:11:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:38.082 16:11:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 527264' 00:06:38.082 killing process with pid 527264 00:06:38.082 16:11:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 527264 00:06:38.082 16:11:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 527264 00:06:43.348 16:12:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 527411 00:06:43.349 16:12:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 527411 ']' 00:06:43.349 16:12:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 527411 00:06:43.349 16:12:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:43.349 16:12:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:43.349 16:12:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 527411 00:06:43.349 16:12:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:43.349 16:12:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:43.349 16:12:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 527411' 00:06:43.349 killing 
process with pid 527411 00:06:43.349 16:12:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 527411 00:06:43.349 16:12:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 527411 00:06:45.881 00:06:45.881 real 0m12.625s 00:06:45.881 user 0m12.991s 00:06:45.881 sys 0m1.566s 00:06:45.881 16:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.881 16:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.881 ************************************ 00:06:45.881 END TEST non_locking_app_on_locked_coremask 00:06:45.881 ************************************ 00:06:45.881 16:12:05 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:45.881 16:12:05 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.881 16:12:05 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.881 16:12:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.881 ************************************ 00:06:45.881 START TEST locking_app_on_unlocked_coremask 00:06:45.881 ************************************ 00:06:45.881 16:12:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:45.881 16:12:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=528884 00:06:45.881 16:12:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:45.881 16:12:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 528884 /var/tmp/spdk.sock 00:06:45.881 16:12:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 528884 ']' 00:06:45.881 16:12:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.881 16:12:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.881 16:12:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.881 16:12:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.881 16:12:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.881 [2024-07-26 16:12:05.458096] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:45.881 [2024-07-26 16:12:05.458257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid528884 ] 00:06:45.881 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.881 [2024-07-26 16:12:05.582875] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:45.881 [2024-07-26 16:12:05.582937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.140 [2024-07-26 16:12:05.846751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.075 16:12:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.075 16:12:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:47.075 16:12:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=529136 00:06:47.075 16:12:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:47.075 16:12:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 529136 /var/tmp/spdk2.sock 00:06:47.075 16:12:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 529136 ']' 00:06:47.075 16:12:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:47.075 16:12:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.075 16:12:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:47.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:47.075 16:12:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.076 16:12:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.333 [2024-07-26 16:12:06.859169] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:47.334 [2024-07-26 16:12:06.859337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid529136 ] 00:06:47.334 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.334 [2024-07-26 16:12:07.068182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.900 [2024-07-26 16:12:07.600631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.433 16:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.434 16:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:50.434 16:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 529136 00:06:50.434 16:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 529136 00:06:50.434 16:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:50.434 lslocks: write error 00:06:50.434 16:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 528884 00:06:50.434 16:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 528884 ']' 00:06:50.434 16:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 528884 00:06:50.434 16:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:50.434 16:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:50.434 16:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 528884 00:06:50.434 16:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:50.434 16:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:50.434 16:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 528884' 00:06:50.434 killing process with pid 528884 00:06:50.434 16:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 528884 00:06:50.434 16:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 528884 00:06:55.710 16:12:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 529136 00:06:55.710 16:12:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 529136 ']' 00:06:55.710 16:12:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 529136 00:06:55.710 16:12:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:55.710 16:12:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:55.710 16:12:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 529136 00:06:55.710 16:12:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:06:55.710 16:12:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:55.710 16:12:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 529136' 00:06:55.710 killing process with pid 529136 00:06:55.710 16:12:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 529136 00:06:55.710 16:12:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 529136 00:06:58.274 00:06:58.274 real 0m12.198s 00:06:58.274 user 0m12.565s 00:06:58.274 sys 0m1.480s 00:06:58.274 16:12:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.274 16:12:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.274 ************************************ 00:06:58.274 END TEST locking_app_on_unlocked_coremask 00:06:58.274 ************************************ 00:06:58.274 16:12:17 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:58.274 16:12:17 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.274 16:12:17 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.274 16:12:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.274 ************************************ 00:06:58.274 START TEST locking_app_on_locked_coremask 00:06:58.274 ************************************ 00:06:58.274 16:12:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:58.274 16:12:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=530890 00:06:58.274 16:12:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:58.274 16:12:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 530890 /var/tmp/spdk.sock 00:06:58.274 16:12:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 530890 ']' 00:06:58.274 16:12:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.274 16:12:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.274 16:12:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.274 16:12:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.274 16:12:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.274 [2024-07-26 16:12:17.701523] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:58.274 [2024-07-26 16:12:17.701682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid530890 ] 00:06:58.274 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.274 [2024-07-26 16:12:17.822530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.534 [2024-07-26 16:12:18.071030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.470 16:12:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:59.470 16:12:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:59.470 16:12:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=531031 00:06:59.470 16:12:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:59.470 16:12:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 531031 /var/tmp/spdk2.sock 00:06:59.471 16:12:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:59.471 16:12:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 531031 /var/tmp/spdk2.sock 00:06:59.471 16:12:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:59.471 16:12:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:59.471 16:12:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:59.471 16:12:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:59.471 16:12:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 531031 /var/tmp/spdk2.sock 00:06:59.471 16:12:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 531031 ']' 00:06:59.471 16:12:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:59.471 16:12:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:59.471 16:12:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:59.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:59.471 16:12:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:59.471 16:12:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.471 [2024-07-26 16:12:19.067421] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:59.471 [2024-07-26 16:12:19.067580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid531031 ] 00:06:59.471 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.730 [2024-07-26 16:12:19.262371] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 530890 has claimed it. 00:06:59.730 [2024-07-26 16:12:19.262463] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:00.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (531031) - No such process 00:07:00.298 ERROR: process (pid: 531031) is no longer running 00:07:00.298 16:12:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:00.298 16:12:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:00.298 16:12:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:00.298 16:12:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:00.298 16:12:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:00.298 16:12:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:00.298 16:12:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 530890 00:07:00.298 16:12:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 530890 00:07:00.298 16:12:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:00.558 lslocks: write error 00:07:00.558 16:12:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 530890 00:07:00.558 16:12:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 530890 ']' 00:07:00.558 16:12:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 530890 00:07:00.558 16:12:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:00.558 16:12:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:00.558 16:12:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 530890 00:07:00.558 16:12:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:00.558 16:12:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:00.558 16:12:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 530890' 00:07:00.558 killing process with pid 530890 00:07:00.558 16:12:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 530890 00:07:00.558 16:12:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 530890 00:07:03.094 00:07:03.094 real 0m5.051s 00:07:03.094 user 0m5.334s 00:07:03.094 sys 0m0.961s 00:07:03.094 16:12:22 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.094 16:12:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.094 ************************************ 00:07:03.094 END TEST locking_app_on_locked_coremask 00:07:03.094 ************************************ 00:07:03.094 16:12:22 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:03.094 16:12:22 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.094 16:12:22 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.094 16:12:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:03.094 ************************************ 00:07:03.094 START TEST locking_overlapped_coremask 00:07:03.094 ************************************ 00:07:03.095 16:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:03.095 16:12:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=531582 00:07:03.095 16:12:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:03.095 16:12:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 531582 /var/tmp/spdk.sock 00:07:03.095 16:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 531582 ']' 00:07:03.095 16:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.095 16:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:03.095 16:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.095 16:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:03.095 16:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.095 [2024-07-26 16:12:22.807274] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:03.095 [2024-07-26 16:12:22.807420] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid531582 ] 00:07:03.355 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.355 [2024-07-26 16:12:22.938731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:03.621 [2024-07-26 16:12:23.204047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.621 [2024-07-26 16:12:23.204280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.621 [2024-07-26 16:12:23.204285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.560 16:12:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:04.560 16:12:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:04.560 16:12:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=531734 00:07:04.560 16:12:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 531734 /var/tmp/spdk2.sock 00:07:04.560 16:12:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:04.560 16:12:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:04.560 16:12:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 531734 /var/tmp/spdk2.sock 00:07:04.560 16:12:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:04.560 16:12:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.560 16:12:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:04.560 16:12:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.560 16:12:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 531734 /var/tmp/spdk2.sock 00:07:04.560 16:12:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 531734 ']' 00:07:04.560 16:12:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:04.560 16:12:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.560 16:12:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:04.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:04.560 16:12:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.560 16:12:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.560 [2024-07-26 16:12:24.141035] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:04.560 [2024-07-26 16:12:24.141208] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid531734 ] 00:07:04.560 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.560 [2024-07-26 16:12:24.315841] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 531582 has claimed it. 00:07:04.560 [2024-07-26 16:12:24.315930] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:05.129 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (531734) - No such process 00:07:05.129 ERROR: process (pid: 531734) is no longer running 00:07:05.129 16:12:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.129 16:12:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:05.129 16:12:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:05.129 16:12:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:05.129 16:12:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:05.129 16:12:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:05.129 16:12:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:05.129 16:12:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:05.129 16:12:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:05.129 16:12:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:05.129 16:12:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 531582 00:07:05.129 16:12:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 531582 ']' 00:07:05.129 16:12:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 531582 00:07:05.129 16:12:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:05.129 16:12:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:05.129 16:12:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 531582 00:07:05.129 16:12:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:05.129 16:12:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:05.129 16:12:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 531582' 00:07:05.129 killing process with pid 531582 00:07:05.129 16:12:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 
-- # kill 531582 00:07:05.129 16:12:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 531582 00:07:07.665 00:07:07.665 real 0m4.373s 00:07:07.665 user 0m11.284s 00:07:07.665 sys 0m0.777s 00:07:07.665 16:12:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.665 16:12:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:07.665 ************************************ 00:07:07.665 END TEST locking_overlapped_coremask 00:07:07.665 ************************************ 00:07:07.665 16:12:27 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:07.665 16:12:27 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.665 16:12:27 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.665 16:12:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.665 ************************************ 00:07:07.665 START TEST locking_overlapped_coremask_via_rpc 00:07:07.665 ************************************ 00:07:07.665 16:12:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:07.665 16:12:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=532157 00:07:07.665 16:12:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:07.665 16:12:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 532157 /var/tmp/spdk.sock 00:07:07.665 16:12:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 532157 ']' 00:07:07.665 16:12:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.665 16:12:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:07.665 16:12:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.665 16:12:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:07.665 16:12:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.665 [2024-07-26 16:12:27.227931] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:07.665 [2024-07-26 16:12:27.228073] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid532157 ] 00:07:07.665 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.665 [2024-07-26 16:12:27.353742] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:07.665 [2024-07-26 16:12:27.353801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:07.925 [2024-07-26 16:12:27.616203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.925 [2024-07-26 16:12:27.616256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.925 [2024-07-26 16:12:27.616261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.862 16:12:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:08.862 16:12:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:08.862 16:12:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=532305 00:07:08.862 16:12:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 532305 /var/tmp/spdk2.sock 00:07:08.862 16:12:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:08.862 16:12:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 532305 ']' 00:07:08.862 16:12:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:08.862 16:12:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.862 16:12:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:08.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:08.862 16:12:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.862 16:12:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.862 [2024-07-26 16:12:28.588317] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:08.862 [2024-07-26 16:12:28.588479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid532305 ] 00:07:09.122 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.122 [2024-07-26 16:12:28.760459] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:09.122 [2024-07-26 16:12:28.760523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:09.688 [2024-07-26 16:12:29.231389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:09.688 [2024-07-26 16:12:29.231461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.688 [2024-07-26 16:12:29.231468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:11.590 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:11.590 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:11.590 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:11.590 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.590 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.590 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.590 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:11.590 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:11.590 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:11.590 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:11.590 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.590 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:11.590 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.590 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:11.590 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.590 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.590 [2024-07-26 16:12:31.347243] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 532157 has claimed it. 
00:07:11.849 request: 00:07:11.849 { 00:07:11.849 "method": "framework_enable_cpumask_locks", 00:07:11.849 "req_id": 1 00:07:11.849 } 00:07:11.849 Got JSON-RPC error response 00:07:11.849 response: 00:07:11.849 { 00:07:11.849 "code": -32603, 00:07:11.849 "message": "Failed to claim CPU core: 2" 00:07:11.849 } 00:07:11.849 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:11.849 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:11.849 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:11.849 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:11.849 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:11.849 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 532157 /var/tmp/spdk.sock 00:07:11.849 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 532157 ']' 00:07:11.849 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.849 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:11.849 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.849 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:11.849 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.108 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.108 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:12.108 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 532305 /var/tmp/spdk2.sock 00:07:12.108 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 532305 ']' 00:07:12.108 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:12.108 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:12.108 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:12.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
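The failed claim above is the expected outcome for this sub-test: the second target (on /var/tmp/spdk2.sock) asks to re-enable CPU-mask locking while process 532157 still owns the core-2 lock file, so framework_enable_cpumask_locks returns JSON-RPC error -32603. A sketch of issuing the same call by hand, assuming SPDK's bundled scripts/rpc.py client exposes the method under its RPC name (the client invocation is an assumption; the request and error bodies are the ones logged above):

    # Hypothetical manual reproduction of the logged RPC exchange.
    # Assumes scripts/rpc.py maps JSON-RPC methods 1:1 to subcommands.
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # Expected while another target holds the core-2 lock (per the log):
    #   "code": -32603, "message": "Failed to claim CPU core: 2"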
00:07:12.108 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:12.108 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.368 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.368 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:12.368 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:12.368 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:12.368 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:12.368 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:12.368 00:07:12.368 real 0m4.788s 00:07:12.368 user 0m1.603s 00:07:12.368 sys 0m0.252s 00:07:12.368 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.368 16:12:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.368 ************************************ 00:07:12.368 END TEST locking_overlapped_coremask_via_rpc 00:07:12.368 ************************************ 00:07:12.368 16:12:31 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:12.368 16:12:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 532157 ]] 00:07:12.368 16:12:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 532157 00:07:12.368 16:12:31 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 532157 ']' 00:07:12.368 16:12:31 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 532157 00:07:12.368 16:12:31 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:12.368 16:12:31 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:12.368 16:12:31 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 532157 00:07:12.368 16:12:31 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:12.368 16:12:31 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:12.368 16:12:31 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 532157' 00:07:12.368 killing process with pid 532157 00:07:12.368 16:12:31 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 532157 00:07:12.368 16:12:31 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 532157 00:07:14.923 16:12:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 532305 ]] 00:07:14.923 16:12:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 532305 00:07:14.923 16:12:34 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 532305 ']' 00:07:14.923 16:12:34 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 532305 00:07:14.923 16:12:34 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:14.923 16:12:34 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
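check_remaining_locks, expanded above, asserts that after the RPC exchange exactly the per-core lock files for cores 0-2 remain under /var/tmp. As a standalone sketch of that assertion (same file naming as in the trace, simplified control flow):

    locks=(/var/tmp/spdk_cpu_lock_*)                     # lock files that actually exist
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2, claimed by the surviving target
    if [[ "${locks[*]}" == "${locks_expected[*]}" ]]; then
        echo "lock files match the expected set"
    else
        echo "unexpected lock files: ${locks[*]}" >&2
    fi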
00:07:14.923 16:12:34 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 532305 00:07:14.923 16:12:34 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:14.923 16:12:34 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:14.923 16:12:34 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 532305' 00:07:14.923 killing process with pid 532305 00:07:14.923 16:12:34 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 532305 00:07:14.923 16:12:34 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 532305 00:07:16.829 16:12:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:16.829 16:12:36 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:16.829 16:12:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 532157 ]] 00:07:16.829 16:12:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 532157 00:07:16.829 16:12:36 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 532157 ']' 00:07:16.829 16:12:36 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 532157 00:07:16.829 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (532157) - No such process 00:07:16.829 16:12:36 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 532157 is not found' 00:07:16.829 Process with pid 532157 is not found 00:07:16.829 16:12:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 532305 ]] 00:07:16.829 16:12:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 532305 00:07:16.829 16:12:36 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 532305 ']' 00:07:16.829 16:12:36 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 532305 00:07:16.829 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (532305) - No such process 00:07:16.829 16:12:36 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 532305 is not found' 00:07:16.829 Process with pid 532305 is not found 00:07:16.829 16:12:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:16.829 00:07:16.829 real 0m52.437s 00:07:16.829 user 1m27.364s 00:07:16.829 sys 0m7.791s 00:07:16.829 16:12:36 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.829 16:12:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:16.829 ************************************ 00:07:16.829 END TEST cpu_locks 00:07:16.829 ************************************ 00:07:16.829 00:07:16.829 real 1m22.164s 00:07:16.829 user 2m22.665s 00:07:16.829 sys 0m12.332s 00:07:16.829 16:12:36 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.829 16:12:36 event -- common/autotest_common.sh@10 -- # set +x 00:07:16.829 ************************************ 00:07:16.829 END TEST event 00:07:16.829 ************************************ 00:07:16.830 16:12:36 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:16.830 16:12:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:16.830 16:12:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:16.830 16:12:36 -- common/autotest_common.sh@10 -- # set +x 00:07:16.830 ************************************ 00:07:16.830 START TEST thread 00:07:16.830 ************************************ 00:07:16.830 16:12:36 thread -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:17.089 * Looking for test storage... 00:07:17.089 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:17.089 16:12:36 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:17.089 16:12:36 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:17.090 16:12:36 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:17.090 16:12:36 thread -- common/autotest_common.sh@10 -- # set +x 00:07:17.090 ************************************ 00:07:17.090 START TEST thread_poller_perf 00:07:17.090 ************************************ 00:07:17.090 16:12:36 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:17.090 [2024-07-26 16:12:36.701725] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:17.090 [2024-07-26 16:12:36.701865] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid533336 ] 00:07:17.090 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.090 [2024-07-26 16:12:36.841019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.348 [2024-07-26 16:12:37.096086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.348 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:19.287 ====================================== 00:07:19.287 busy:2712231081 (cyc) 00:07:19.287 total_run_count: 282000 00:07:19.287 tsc_hz: 2700000000 (cyc) 00:07:19.287 ====================================== 00:07:19.287 poller_cost: 9617 (cyc), 3561 (nsec) 00:07:19.287 00:07:19.287 real 0m1.894s 00:07:19.287 user 0m1.719s 00:07:19.287 sys 0m0.165s 00:07:19.287 16:12:38 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.287 16:12:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:19.287 ************************************ 00:07:19.287 END TEST thread_poller_perf 00:07:19.287 ************************************ 00:07:19.287 16:12:38 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:19.287 16:12:38 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:19.287 16:12:38 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:19.287 16:12:38 thread -- common/autotest_common.sh@10 -- # set +x 00:07:19.287 ************************************ 00:07:19.287 START TEST thread_poller_perf 00:07:19.287 ************************************ 00:07:19.287 16:12:38 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:19.287 [2024-07-26 16:12:38.647262] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
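The poller_perf summary above is plain arithmetic over the reported counters; as a back-of-envelope check (an illustration, not part of the tool), cycles per poll is busy cycles divided by the run count, and nanoseconds follow from the 2.7 GHz TSC:

    busy=2712231081 runs=282000 tsc_hz=2700000000
    awk -v b="$busy" -v r="$runs" -v hz="$tsc_hz" 'BEGIN {
        cyc = int(b / r)                                   # 9617 cycles per poll
        printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, int(cyc * 1e9 / hz)
    }'

The second run starting here uses a 0 microsecond poller period; by the same computation it lands at 739 cycles (273 nsec) per poll, as reported further down.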
00:07:19.287 [2024-07-26 16:12:38.647402] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid533625 ] 00:07:19.287 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.287 [2024-07-26 16:12:38.794532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.546 [2024-07-26 16:12:39.051112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.546 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:20.925 ====================================== 00:07:20.925 busy:2705371402 (cyc) 00:07:20.925 total_run_count: 3660000 00:07:20.925 tsc_hz: 2700000000 (cyc) 00:07:20.925 ====================================== 00:07:20.925 poller_cost: 739 (cyc), 273 (nsec) 00:07:20.925 00:07:20.925 real 0m1.894s 00:07:20.925 user 0m1.708s 00:07:20.925 sys 0m0.175s 00:07:20.925 16:12:40 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.925 16:12:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:20.925 ************************************ 00:07:20.925 END TEST thread_poller_perf 00:07:20.925 ************************************ 00:07:20.925 16:12:40 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:20.925 00:07:20.925 real 0m3.939s 00:07:20.925 user 0m3.475s 00:07:20.925 sys 0m0.456s 00:07:20.925 16:12:40 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.925 16:12:40 thread -- common/autotest_common.sh@10 -- # set +x 00:07:20.925 ************************************ 00:07:20.925 END TEST thread 00:07:20.925 ************************************ 00:07:20.925 16:12:40 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:07:20.925 16:12:40 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:20.925 16:12:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:20.925 16:12:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.925 16:12:40 -- common/autotest_common.sh@10 -- # set +x 00:07:20.925 ************************************ 00:07:20.925 START TEST app_cmdline 00:07:20.925 ************************************ 00:07:20.925 16:12:40 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:20.925 * Looking for test storage... 00:07:20.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:20.925 16:12:40 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:20.925 16:12:40 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=533945 00:07:20.925 16:12:40 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:20.925 16:12:40 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 533945 00:07:20.925 16:12:40 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 533945 ']' 00:07:20.925 16:12:40 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.925 16:12:40 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.925 16:12:40 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:20.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.926 16:12:40 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.926 16:12:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:21.184 [2024-07-26 16:12:40.714578] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:21.184 [2024-07-26 16:12:40.714739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid533945 ] 00:07:21.184 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.184 [2024-07-26 16:12:40.842386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.444 [2024-07-26 16:12:41.097177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.380 16:12:41 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:22.380 16:12:41 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:22.380 16:12:41 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:22.638 { 00:07:22.638 "version": "SPDK v24.09-pre git sha1 704257090", 00:07:22.638 "fields": { 00:07:22.638 "major": 24, 00:07:22.638 "minor": 9, 00:07:22.638 "patch": 0, 00:07:22.638 "suffix": "-pre", 00:07:22.638 "commit": "704257090" 00:07:22.638 } 00:07:22.638 } 00:07:22.638 16:12:42 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:22.638 16:12:42 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:22.638 16:12:42 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:22.638 16:12:42 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:22.638 16:12:42 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:22.638 16:12:42 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.638 16:12:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:22.638 16:12:42 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:22.638 16:12:42 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:22.638 16:12:42 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.638 16:12:42 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:22.638 16:12:42 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:22.639 16:12:42 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:22.639 16:12:42 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:22.639 16:12:42 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:22.639 16:12:42 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:22.639 16:12:42 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:22.639 16:12:42 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:22.639 16:12:42 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:07:22.639 16:12:42 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:22.639 16:12:42 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:22.639 16:12:42 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:22.639 16:12:42 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:22.639 16:12:42 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:22.897 request: 00:07:22.897 { 00:07:22.897 "method": "env_dpdk_get_mem_stats", 00:07:22.897 "req_id": 1 00:07:22.897 } 00:07:22.897 Got JSON-RPC error response 00:07:22.897 response: 00:07:22.897 { 00:07:22.897 "code": -32601, 00:07:22.897 "message": "Method not found" 00:07:22.897 } 00:07:22.897 16:12:42 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:22.897 16:12:42 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:22.897 16:12:42 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:22.897 16:12:42 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:22.897 16:12:42 app_cmdline -- app/cmdline.sh@1 -- # killprocess 533945 00:07:22.897 16:12:42 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 533945 ']' 00:07:22.897 16:12:42 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 533945 00:07:22.897 16:12:42 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:22.897 16:12:42 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:22.897 16:12:42 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 533945 00:07:22.897 16:12:42 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:22.897 16:12:42 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:22.897 16:12:42 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 533945' 00:07:22.897 killing process with pid 533945 00:07:22.897 16:12:42 app_cmdline -- common/autotest_common.sh@969 -- # kill 533945 00:07:22.897 16:12:42 app_cmdline -- common/autotest_common.sh@974 -- # wait 533945 00:07:25.431 00:07:25.431 real 0m4.480s 00:07:25.431 user 0m4.935s 00:07:25.431 sys 0m0.649s 00:07:25.431 16:12:45 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.431 16:12:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:25.431 ************************************ 00:07:25.431 END TEST app_cmdline 00:07:25.431 ************************************ 00:07:25.431 16:12:45 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:25.431 16:12:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:25.431 16:12:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.431 16:12:45 -- common/autotest_common.sh@10 -- # set +x 00:07:25.431 ************************************ 00:07:25.431 START TEST version 00:07:25.431 ************************************ 00:07:25.431 16:12:45 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:25.431 * Looking for test storage... 
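This cmdline test runs against a target started with --rpcs-allowed spdk_get_version,rpc_get_methods, so the two whitelisted queries succeed while env_dpdk_get_mem_stats is rejected with -32601 as shown above. Condensed into direct rpc.py calls (a hedged recap of what the traced rpc_cmd wrapper ends up invoking):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC spdk_get_version | jq -r '.version'    # "SPDK v24.09-pre git sha1 704257090"
    $RPC rpc_get_methods | jq -r '.[]' | sort   # exactly the two allowed methods
    $RPC env_dpdk_get_mem_stats                 # expected to fail: "Method not found" (-32601)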
00:07:25.432 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:25.432 16:12:45 version -- app/version.sh@17 -- # get_header_version major 00:07:25.432 16:12:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:25.432 16:12:45 version -- app/version.sh@14 -- # cut -f2 00:07:25.432 16:12:45 version -- app/version.sh@14 -- # tr -d '"' 00:07:25.432 16:12:45 version -- app/version.sh@17 -- # major=24 00:07:25.432 16:12:45 version -- app/version.sh@18 -- # get_header_version minor 00:07:25.432 16:12:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:25.432 16:12:45 version -- app/version.sh@14 -- # cut -f2 00:07:25.432 16:12:45 version -- app/version.sh@14 -- # tr -d '"' 00:07:25.432 16:12:45 version -- app/version.sh@18 -- # minor=9 00:07:25.432 16:12:45 version -- app/version.sh@19 -- # get_header_version patch 00:07:25.432 16:12:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:25.432 16:12:45 version -- app/version.sh@14 -- # cut -f2 00:07:25.432 16:12:45 version -- app/version.sh@14 -- # tr -d '"' 00:07:25.432 16:12:45 version -- app/version.sh@19 -- # patch=0 00:07:25.432 16:12:45 version -- app/version.sh@20 -- # get_header_version suffix 00:07:25.432 16:12:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:25.432 16:12:45 version -- app/version.sh@14 -- # cut -f2 00:07:25.432 16:12:45 version -- app/version.sh@14 -- # tr -d '"' 00:07:25.432 16:12:45 version -- app/version.sh@20 -- # suffix=-pre 00:07:25.432 16:12:45 version -- app/version.sh@22 -- # version=24.9 00:07:25.432 16:12:45 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:25.432 16:12:45 version -- app/version.sh@28 -- # version=24.9rc0 00:07:25.432 16:12:45 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:25.432 16:12:45 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:25.690 16:12:45 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:25.691 16:12:45 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:25.691 00:07:25.691 real 0m0.100s 00:07:25.691 user 0m0.054s 00:07:25.691 sys 0m0.067s 00:07:25.691 16:12:45 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.691 16:12:45 version -- common/autotest_common.sh@10 -- # set +x 00:07:25.691 ************************************ 00:07:25.691 END TEST version 00:07:25.691 ************************************ 00:07:25.691 16:12:45 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:07:25.691 16:12:45 -- spdk/autotest.sh@202 -- # uname -s 00:07:25.691 16:12:45 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:07:25.691 16:12:45 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:07:25.691 16:12:45 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:07:25.691 16:12:45 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 
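version.sh derives the version purely from include/spdk/version.h; the get_header_version calls traced above are grep/cut/tr pipelines, roughly as in this condensed sketch:

    hdr=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    echo "${major}.${minor}${suffix}"   # 24.9-pre on this tree
    # version.sh then maps the -pre suffix to 24.9rc0 and compares it against
    # python3 -c 'import spdk; print(spdk.__version__)'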
00:07:25.691 16:12:45 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:07:25.691 16:12:45 -- spdk/autotest.sh@264 -- # timing_exit lib 00:07:25.691 16:12:45 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:25.691 16:12:45 -- common/autotest_common.sh@10 -- # set +x 00:07:25.691 16:12:45 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:07:25.691 16:12:45 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:07:25.691 16:12:45 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:07:25.691 16:12:45 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:07:25.691 16:12:45 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:07:25.691 16:12:45 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:07:25.691 16:12:45 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:25.691 16:12:45 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:25.691 16:12:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.691 16:12:45 -- common/autotest_common.sh@10 -- # set +x 00:07:25.691 ************************************ 00:07:25.691 START TEST nvmf_tcp 00:07:25.691 ************************************ 00:07:25.691 16:12:45 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:25.691 * Looking for test storage... 00:07:25.691 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:25.691 16:12:45 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:25.691 16:12:45 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:25.691 16:12:45 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:25.691 16:12:45 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:25.691 16:12:45 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.691 16:12:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:25.691 ************************************ 00:07:25.691 START TEST nvmf_target_core 00:07:25.691 ************************************ 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:25.691 * Looking for test storage... 00:07:25.691 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:25.691 ************************************ 00:07:25.691 START TEST nvmf_abort 00:07:25.691 ************************************ 00:07:25.691 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:25.952 * Looking for test storage... 
00:07:25.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
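common.sh has now collected everything an initiator needs to reach the target: port 4420, a host NQN freshly generated with nvme gen-hostnqn, the matching host ID and the default subsystem NQN. This particular run drives I/O with the SPDK abort example rather than the kernel initiator, but as a hedged illustration the variables map onto an nvme-cli connect like this (the 10.0.0.2 address is only assigned by nvmf_tcp_init a little further down):

    NVMF_FIRST_TARGET_IP=10.0.0.2
    NVMF_PORT=4420
    nvme connect -t tcp -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT" \
        -n nqn.2016-06.io.spdk:testnqn \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55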
00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:07:25.952 16:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:27.858 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:27.858 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:27.858 16:12:47 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:27.858 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:27.858 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:27.859 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:27.859 
16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:27.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:27.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:07:27.859 00:07:27.859 --- 10.0.0.2 ping statistics --- 00:07:27.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:27.859 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:27.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:27.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:07:27.859 00:07:27.859 --- 10.0.0.1 ping statistics --- 00:07:27.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:27.859 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # 
nvmfpid=536262 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 536262 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 536262 ']' 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.859 16:12:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:27.859 [2024-07-26 16:12:47.547386] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:27.859 [2024-07-26 16:12:47.547535] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:28.118 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.118 [2024-07-26 16:12:47.678755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:28.378 [2024-07-26 16:12:47.937412] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:28.378 [2024-07-26 16:12:47.937497] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:28.378 [2024-07-26 16:12:47.937532] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:28.378 [2024-07-26 16:12:47.937554] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:28.378 [2024-07-26 16:12:47.937576] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
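For readability, the nvmf_tcp_init and nvmfappstart work traced above condenses to the following recap (commands are taken from the trace, lightly condensed; cvl_0_0 and cvl_0_1 are the two E810 ports found earlier):

    ip netns add cvl_0_0_ns_spdk                                    # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                              # the reachability check shown above
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    # -m 0xE = 0b1110, so the three reactors land on cores 1, 2 and 3 as reported just below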
00:07:28.378 [2024-07-26 16:12:47.937714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.378 [2024-07-26 16:12:47.937766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.378 [2024-07-26 16:12:47.937772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:28.945 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:28.945 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:07:28.945 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:28.945 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:28.945 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:28.945 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:28.945 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:28.945 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.945 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:28.945 [2024-07-26 16:12:48.532215] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:28.945 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.945 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:28.945 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.945 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:28.945 Malloc0 00:07:28.945 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.945 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:28.945 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.945 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:28.945 Delay0 00:07:28.945 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.945 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:28.945 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.945 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:28.945 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.945 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:28.945 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.946 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:28.946 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:07:28.946 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:28.946 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.946 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:28.946 [2024-07-26 16:12:48.652930] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:28.946 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.946 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:28.946 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.946 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:28.946 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.946 16:12:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:29.206 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.206 [2024-07-26 16:12:48.810740] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:31.742 Initializing NVMe Controllers 00:07:31.742 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:31.742 controller IO queue size 128 less than required 00:07:31.742 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:31.742 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:31.742 Initialization complete. Launching workers. 
00:07:31.742 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 25456 00:07:31.742 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 25517, failed to submit 66 00:07:31.742 success 25456, unsuccess 61, failed 0 00:07:31.742 16:12:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:31.742 16:12:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.742 16:12:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:31.742 16:12:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.742 16:12:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:31.742 16:12:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:31.742 16:12:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:31.742 16:12:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:31.742 16:12:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:31.742 16:12:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:31.742 16:12:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:31.742 16:12:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:31.742 rmmod nvme_tcp 00:07:31.742 rmmod nvme_fabrics 00:07:31.742 rmmod nvme_keyring 00:07:31.742 16:12:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:31.742 16:12:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:31.742 16:12:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:31.742 16:12:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 536262 ']' 00:07:31.742 16:12:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 536262 00:07:31.742 16:12:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 536262 ']' 00:07:31.742 16:12:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 536262 00:07:31.742 16:12:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:07:31.742 16:12:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:31.742 16:12:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 536262 00:07:31.742 16:12:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:31.742 16:12:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:31.742 16:12:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 536262' 00:07:31.742 killing process with pid 536262 00:07:31.742 16:12:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 536262 00:07:31.742 16:12:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 536262 00:07:32.681 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:32.681 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:32.681 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:32.681 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:32.681 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:32.681 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:32.681 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:32.681 16:12:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:35.219 00:07:35.219 real 0m9.066s 00:07:35.219 user 0m14.792s 00:07:35.219 sys 0m2.671s 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:35.219 ************************************ 00:07:35.219 END TEST nvmf_abort 00:07:35.219 ************************************ 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:35.219 ************************************ 00:07:35.219 START TEST nvmf_ns_hotplug_stress 00:07:35.219 ************************************ 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:35.219 * Looking for test storage... 
00:07:35.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:35.219 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.220 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:35.220 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.220 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:35.220 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:35.220 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:07:35.220 16:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 
00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:37.156 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:37.156 16:12:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:37.156 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.156 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:37.157 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:37.157 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:37.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:37.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:07:37.157 00:07:37.157 --- 10.0.0.2 ping statistics --- 00:07:37.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.157 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:37.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:37.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:07:37.157 00:07:37.157 --- 10.0.0.1 ping statistics --- 00:07:37.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.157 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=538760 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 538760 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 538760 ']' 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
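The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message comes from the harness's waitforlisten step: it polls the target's RPC socket and only proceeds with configuration once the socket answers. A rough, hand-rolled equivalent (rpc.py path shortened; /var/tmp/spdk.sock is the socket named in the log) would be:

  # poll the RPC socket until the freshly started nvmf_tgt responds
  for i in $(seq 1 100); do
      ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done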
00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:37.157 16:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:37.157 [2024-07-26 16:12:56.760879] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:37.157 [2024-07-26 16:12:56.761023] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.157 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.157 [2024-07-26 16:12:56.897025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:37.416 [2024-07-26 16:12:57.161586] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:37.416 [2024-07-26 16:12:57.161670] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:37.416 [2024-07-26 16:12:57.161703] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:37.416 [2024-07-26 16:12:57.161724] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:37.416 [2024-07-26 16:12:57.161745] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:37.416 [2024-07-26 16:12:57.161874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.416 [2024-07-26 16:12:57.161925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.416 [2024-07-26 16:12:57.161932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:37.982 16:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:37.982 16:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:07:37.982 16:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:37.982 16:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:37.982 16:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:37.982 16:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:37.982 16:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:37.982 16:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:38.240 [2024-07-26 16:12:57.961978] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:38.240 16:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:38.498 16:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:38.756 
[2024-07-26 16:12:58.453195] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:38.756 16:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:39.014 16:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:39.272 Malloc0 00:07:39.272 16:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:39.531 Delay0 00:07:39.531 16:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.789 16:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:40.046 NULL1 00:07:40.047 16:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:40.304 16:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=539184 00:07:40.304 16:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:40.304 16:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539184 00:07:40.304 16:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.304 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.562 16:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.819 16:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:40.819 16:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:41.076 true 00:07:41.076 16:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539184 00:07:41.076 16:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.333 16:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
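From this point the trace is the stress loop itself: a spdk_nvme_perf reader (the PERF_PID captured above, 539184) keeps a 30-second randread workload running against nqn.2016-06.io.spdk:cnode1 while the script repeatedly detaches namespace 1, re-attaches the Delay0 bdev, and grows the NULL1 bdev by one unit per pass (null_size 1001, 1002, ... as the lines below show). Restated roughly with the same rpc.py calls the trace uses (rpc.py path shortened):

  rpc=./scripts/rpc.py
  size=1000
  while kill -0 "$PERF_PID"; do                      # run until the perf reader exits
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      $rpc nvmf_subsystem_add_ns    nqn.2016-06.io.spdk:cnode1 Delay0
      size=$((size + 1))
      $rpc bdev_null_resize NULL1 "$size"
  done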
00:07:41.591 16:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:41.591 16:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:41.848 true 00:07:41.849 16:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539184 00:07:41.849 16:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.106 16:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.364 16:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:42.364 16:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:42.621 true 00:07:42.621 16:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539184 00:07:42.621 16:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.557 Read completed with error (sct=0, sc=11) 00:07:43.557 16:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.557 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.816 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.816 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.816 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.816 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.074 16:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:44.074 16:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:44.074 true 00:07:44.074 16:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539184 00:07:44.074 16:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.010 16:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.010 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.268 16:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:45.268 16:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:45.526 true 00:07:45.526 16:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539184 00:07:45.526 16:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.784 16:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.043 16:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:46.043 16:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:46.301 true 00:07:46.301 16:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539184 00:07:46.301 16:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.237 16:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.497 16:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:47.498 16:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:47.498 true 00:07:47.756 16:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539184 00:07:47.756 16:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.015 16:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.274 16:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:48.274 16:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:48.533 true 00:07:48.533 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539184 00:07:48.533 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.793 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.053 16:13:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:49.053 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:49.053 true 00:07:49.312 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539184 00:07:49.312 16:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.250 16:13:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.508 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.508 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.508 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:50.508 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:50.766 true 00:07:50.766 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539184 00:07:50.766 16:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.732 16:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.732 16:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:51.732 16:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:51.990 true 00:07:51.990 16:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539184 00:07:51.990 16:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.248 16:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.505 16:13:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:52.505 16:13:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:52.763 true 00:07:52.763 
16:13:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539184 00:07:52.763 16:13:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.707 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:53.707 16:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:53.970 16:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:53.970 16:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:54.228 true 00:07:54.228 16:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539184 00:07:54.228 16:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.485 16:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.742 16:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:54.742 16:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:54.999 true 00:07:54.999 16:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539184 00:07:54.999 16:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.934 16:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.934 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.934 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.192 16:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:56.192 16:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:56.450 true 00:07:56.450 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539184 00:07:56.450 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.708 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.966 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:56.966 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:57.224 true 00:07:57.224 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539184 00:07:57.224 16:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.163 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:58.163 16:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.421 16:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:58.421 16:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:58.680 true 00:07:58.680 16:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539184 00:07:58.680 16:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.939 16:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.197 16:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:59.197 16:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:59.455 true 00:07:59.455 16:13:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539184 00:07:59.455 16:13:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.389 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.389 16:13:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.645 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:00.645 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:00.902 true 00:08:00.903 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 539184 00:08:00.903 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.161 16:13:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.419 16:13:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:01.419 16:13:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:01.676 true 00:08:01.676 16:13:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539184 00:08:01.676 16:13:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.613 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.613 16:13:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.871 16:13:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:02.871 16:13:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:02.871 true 00:08:02.871 16:13:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539184 00:08:02.871 16:13:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.437 16:13:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.437 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:03.437 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:03.693 true 00:08:03.693 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539184 00:08:03.694 16:13:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.628 16:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.885 16:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:04.886 16:13:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:05.143 true 00:08:05.143 16:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539184 00:08:05.143 16:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.401 16:13:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.659 16:13:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:05.659 16:13:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:05.976 true 00:08:05.976 16:13:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539184 00:08:05.976 16:13:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.253 16:13:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.511 16:13:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:06.511 16:13:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:06.770 true 00:08:06.770 16:13:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539184 00:08:06.770 16:13:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.709 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:07.709 16:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.967 16:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:07.967 16:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:08.225 true 00:08:08.225 16:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539184 00:08:08.225 16:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.483 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.741 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:08.741 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:08.999 true 00:08:08.999 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539184 00:08:08.999 16:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.935 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.935 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.193 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:10.193 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:10.193 true 00:08:10.193 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539184 00:08:10.193 16:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.452 16:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.710 Initializing NVMe Controllers 00:08:10.710 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:10.710 Controller IO queue size 128, less than required. 00:08:10.710 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:10.710 Controller IO queue size 128, less than required. 00:08:10.710 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:10.710 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:10.710 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:10.710 Initialization complete. Launching workers. 
00:08:10.710 ======================================================== 00:08:10.710 Latency(us) 00:08:10.710 Device Information : IOPS MiB/s Average min max 00:08:10.710 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 736.65 0.36 86722.55 3527.38 1017143.67 00:08:10.710 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8339.44 4.07 15299.97 3807.31 389348.73 00:08:10.710 ======================================================== 00:08:10.710 Total : 9076.09 4.43 21096.94 3527.38 1017143.67 00:08:10.710 00:08:10.970 16:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:08:10.970 16:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:10.970 true 00:08:11.228 16:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539184 00:08:11.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (539184) - No such process 00:08:11.228 16:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 539184 00:08:11.228 16:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.228 16:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:11.796 16:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:11.796 16:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:11.796 16:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:11.796 16:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:11.797 16:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:11.797 null0 00:08:11.797 16:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:11.797 16:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:11.797 16:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:12.055 null1 00:08:12.055 16:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:12.055 16:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:12.055 16:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:12.312 null2 00:08:12.312 16:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:12.312 
16:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:12.312 16:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:12.570 null3 00:08:12.570 16:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:12.570 16:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:12.570 16:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:12.828 null4 00:08:12.828 16:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:12.828 16:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:12.828 16:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:13.085 null5 00:08:13.086 16:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:13.086 16:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:13.086 16:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:13.343 null6 00:08:13.343 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:13.343 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:13.343 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:13.602 null7 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
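The portion of the trace up to the "No such process" message and the perf latency summary exercises script lines @44-@55 of ns_hotplug_stress.sh: while the background I/O process (PID 539184 in this run) is still alive, namespace 1 of cnode1 is hot-removed and re-added and the NULL1 bdev is resized to an incrementing value on each pass. The following is a minimal bash sketch reconstructed only from the commands visible in the trace; the loop structure, the nqn/perf_pid/null_size variable names, and the starting size are assumptions, not the script's actual source.

# Hedged sketch of the single-namespace hotplug loop (trace markers @44-@55).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
perf_pid=539184   # background I/O process; 539184 in this particular run
null_size=1000    # starting value is not visible in this excerpt

# While the I/O workload is running, hot-remove and re-add namespace 1 and
# grow the NULL1 bdev by one size unit per iteration.
while kill -0 "$perf_pid" 2>/dev/null; do
    $rpc nvmf_subsystem_remove_ns "$nqn" 1
    $rpc nvmf_subsystem_add_ns "$nqn" Delay0
    ((++null_size))
    $rpc bdev_null_resize NULL1 "$null_size"
done

# Once the I/O process has exited, reap it and drop both namespaces.
wait "$perf_pid"
$rpc nvmf_subsystem_remove_ns "$nqn" 1
$rpc nvmf_subsystem_remove_ns "$nqn" 2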
00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
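From this point the trace follows script lines @58-@66, which set up the multi-worker phase: eight null bdevs null0..null7 are created with bdev_null_create (arguments 100 and 4096 as shown in the trace), then eight add_remove workers are started in the background with their PIDs collected for a final wait (the wait at @66 lists the eight worker PIDs of this run). A sketch of that setup, assuming the two-loop structure implied by the @59 and @62 markers; exact option handling in the real script is not shown here.

# Hedged sketch of the multi-worker setup (trace markers @58-@66).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nthreads=8
pids=()

# Create one null bdev per worker: null0 .. null7.
for ((i = 0; i < nthreads; i++)); do
    $rpc bdev_null_create "null$i" 100 4096
done

# Launch one add_remove worker per bdev (namespace IDs 1..8) in the
# background and remember the PIDs so the test can wait for all of them.
for ((i = 0; i < nthreads; i++)); do
    add_remove "$((i + 1))" "null$i" &
    pids+=($!)
done

wait "${pids[@]}"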
00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:13.602 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:13.603 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:13.603 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:13.603 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:13.603 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.603 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:13.603 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:13.603 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:13.603 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:13.603 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:13.603 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:13.603 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:13.603 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 543137 543138 543141 543143 543145 543148 543150 543152 00:08:13.603 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.603 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:13.861 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:13.861 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:13.861 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:13.861 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:13.861 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.861 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:13.861 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:13.861 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:14.119 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.119 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.119 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:14.119 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.119 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.119 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:14.119 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.119 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.119 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:14.119 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.119 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.119 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:14.119 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.119 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.119 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:14.119 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.119 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.119 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:14.119 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:08:14.119 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.119 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:14.119 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.119 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.119 16:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:14.377 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:14.377 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:14.377 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:14.377 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:14.377 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:14.377 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.377 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:14.377 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:14.639 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.639 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.639 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:14.639 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.639 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.639 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:14.639 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.639 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.639 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:14.639 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.639 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.639 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:14.639 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.639 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.639 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:14.639 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.640 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.640 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:14.640 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.640 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.640 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.640 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.640 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:14.640 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:14.897 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:14.898 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:14.898 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:15.155 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.155 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:15.155 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:15.155 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:15.155 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:15.414 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.414 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.414 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:15.414 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.414 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.414 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:15.414 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.414 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.414 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:15.414 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.414 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.414 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:15.414 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.414 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.414 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:15.414 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.414 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.414 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:15.414 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.414 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.414 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:15.414 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.414 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.414 16:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:15.672 16:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:15.672 16:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:15.672 16:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:15.672 16:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:15.672 16:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:15.672 16:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:15.672 16:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:15.672 16:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.931 16:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
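Each backgrounded worker runs the add_remove helper whose body is what the @14-@18 markers trace repeatedly throughout this phase: it attaches its null bdev to nqn.2016-06.io.spdk:cnode1 under a fixed namespace ID and detaches it again, ten times. A hedged sketch of that helper follows; the RPC verbs, arguments, and the loop bound of 10 are copied from the trace, while the function wrapper and the rpc/nqn variable names are assumptions.

# Hedged sketch of the add_remove worker (trace markers @14-@18).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

add_remove() {
    local nsid=$1 bdev=$2

    # Repeatedly hot-add and hot-remove the namespace; ten iterations per
    # worker, as seen in the "(( i < 10 ))" markers above.
    for ((i = 0; i < 10; i++)); do
        $rpc nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
        $rpc nvmf_subsystem_remove_ns "$nqn" "$nsid"
    done
}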
00:08:15.931 16:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.931 16:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:15.931 16:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.931 16:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.931 16:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:15.931 16:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.931 16:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.931 16:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:15.931 16:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.931 16:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.931 16:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:15.931 16:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.931 16:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.931 16:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.931 16:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.931 16:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:15.931 16:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:15.931 16:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.931 16:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.931 16:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:15.931 16:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.931 16:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.931 16:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:16.190 16:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:16.190 16:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:16.190 16:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:16.190 16:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:16.190 16:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:16.190 16:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.190 16:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:16.190 16:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:16.448 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.448 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.448 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:16.448 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.448 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.448 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:16.448 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.448 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.448 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:16.448 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:16.448 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.448 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:16.448 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.448 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.448 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:16.448 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.448 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.448 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.448 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:16.448 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.448 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:16.448 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.448 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.448 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:16.706 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:16.706 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:16.706 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:16.706 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:16.706 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:16.706 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.706 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:16.706 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:16.964 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.964 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.964 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:16.964 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.964 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.964 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:16.964 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.964 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.964 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:16.964 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.964 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.964 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:16.964 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.964 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.964 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:16.964 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.964 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.964 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:16.964 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:08:16.964 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.964 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:16.964 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.964 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.964 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:17.222 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:17.222 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:17.222 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:17.222 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:17.222 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:17.222 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.222 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:17.222 16:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:17.479 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.479 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.479 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:17.479 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.479 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.479 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:17.479 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.479 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.479 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:17.480 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.480 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.480 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:17.480 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.480 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.480 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.480 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.480 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:17.480 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:17.480 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.480 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.480 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:17.480 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.480 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.480 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:17.738 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:17.738 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:17.738 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:17.738 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:17.738 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:17.738 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.738 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:17.738 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:17.997 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.997 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.997 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:17.997 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.997 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.997 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:17.997 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.997 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.997 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:17.997 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.997 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.997 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:17.997 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.997 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.997 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:17.997 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.997 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.997 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.997 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.997 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:17.997 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:17.997 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.997 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.997 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:18.255 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:18.255 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:18.255 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.255 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:18.255 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:18.255 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:18.255 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:18.255 16:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:18.513 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
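To make the interleaved xtrace above easier to follow: lines 16-18 of ns_hotplug_stress.sh loop ten times, adding and then removing a namespace on nqn.2016-06.io.spdk:cnode1, and the shuffled ordering of the RPC calls suggests one such loop runs per namespace ID 1-8 in parallel, each backed by the matching null0-null7 bdev. A minimal bash sketch of that structure, reconstructed from the logged RPC calls rather than from the script itself (the parallelism is an inference; the real script may differ in detail):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # one worker per namespace ID, all running concurrently
    for nsid in {1..8}; do
        (
            for (( i = 0; i < 10; i++ )); do   # loop seen at script lines 16-18
                "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "null$(( nsid - 1 ))"
                "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
            done
        ) &
    done
    wait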
00:08:18.513 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.513 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:18.513 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.513 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.513 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:18.513 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.513 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.513 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:18.513 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.513 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.513 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:18.513 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.513 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.513 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:18.513 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.514 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.514 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:18.514 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.514 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.514 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:18.514 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.514 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.514 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:18.771 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:18.771 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.771 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:18.771 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:18.771 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:18.771 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:18.771 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:18.771 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:19.031 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.031 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.289 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.289 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.289 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.289 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.289 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.289 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.289 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.289 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.289 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.289 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.289 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.289 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.289 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.289 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.289 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:19.289 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:19.289 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:19.289 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:19.289 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:19.289 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:19.289 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:19.289 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:19.289 rmmod nvme_tcp 00:08:19.289 rmmod nvme_fabrics 00:08:19.289 rmmod nvme_keyring 00:08:19.289 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:19.289 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:19.289 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:19.289 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 538760 ']' 00:08:19.289 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 538760 00:08:19.289 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 538760 ']' 00:08:19.289 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 538760 00:08:19.289 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:08:19.290 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:19.290 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 538760 00:08:19.290 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:19.290 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:19.290 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 538760' 00:08:19.290 killing process with pid 538760 00:08:19.290 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 538760 00:08:19.290 16:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 538760 00:08:20.736 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:20.736 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 
-- # [[ tcp == \t\c\p ]] 00:08:20.736 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:20.736 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:20.736 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:20.736 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.736 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.736 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.644 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:22.644 00:08:22.644 real 0m47.754s 00:08:22.644 user 3m29.782s 00:08:22.644 sys 0m18.336s 00:08:22.644 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.644 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:22.644 ************************************ 00:08:22.644 END TEST nvmf_ns_hotplug_stress 00:08:22.644 ************************************ 00:08:22.644 16:13:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:22.644 16:13:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:22.644 16:13:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.644 16:13:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:22.644 ************************************ 00:08:22.644 START TEST nvmf_delete_subsystem 00:08:22.644 ************************************ 00:08:22.644 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:22.644 * Looking for test storage... 
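For readers skimming the teardown that closed out the hotplug test just above: stripped of the xtrace prefixes, nvmftestfini/nvmfcleanup reduce to roughly the following (the module-removal retry loop lives in nvmf/common.sh, and the netns cleanup done by _remove_spdk_ns is paraphrased, not quoted from the script):

    trap - SIGINT SIGTERM EXIT
    sync
    modprobe -v -r nvme-tcp        # common.sh retries the module removals up to 20 times
    modprobe -v -r nvme-fabrics
    kill 538760 && wait 538760     # the nvmf_tgt launched for the hotplug test
    _remove_spdk_ns                # tears down the cvl_0_0_ns_spdk network namespace (assumed)
    ip -4 addr flush cvl_0_1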
00:08:22.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:22.644 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:22.644 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:22.644 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:22.644 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:22.644 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:22.644 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:22.644 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:22.644 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:22.644 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:22.644 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:22.644 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:22.644 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:22.644 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:22.644 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:22.644 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:22.644 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:22.644 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:22.644 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:22.644 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:22.644 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.644 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.644 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.644 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.644 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.644 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.644 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:22.645 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.645 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:08:22.645 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:22.645 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:22.645 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:22.645 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:22.645 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:22.645 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:22.645 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:22.645 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:22.645 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:22.645 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:22.645 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:22.645 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:22.645 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:22.645 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:22.645 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.645 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:22.645 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.645 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:22.645 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:22.645 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:22.645 16:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 
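As context for the NVME_HOSTNQN/NVME_HOSTID values generated a few lines up: this run drives I/O with spdk_nvme_perf rather than the kernel initiator (see below), but when a test does use NVME_CONNECT, those variables end up in a call along these lines (illustrative only; the address and port are the values common.sh assigns later in this trace):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55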
00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:25.181 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:25.181 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:25.181 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:25.181 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:25.181 16:13:44 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:25.181 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:25.181 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:08:25.181 00:08:25.181 --- 10.0.0.2 ping statistics --- 00:08:25.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.181 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:25.181 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:25.181 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:08:25.181 00:08:25.181 --- 10.0.0.1 ping statistics --- 00:08:25.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.181 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:25.181 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:08:25.182 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:25.182 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:25.182 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:25.182 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:25.182 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:25.182 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:25.182 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:25.182 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:25.182 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:25.182 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:25.182 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:25.182 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=546072 00:08:25.182 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:25.182 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 546072 00:08:25.182 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 546072 ']' 00:08:25.182 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.182 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:25.182 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.182 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:25.182 16:13:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:25.182 [2024-07-26 16:13:44.626989] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
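The target application itself runs inside the freshly created namespace; minus the xtrace noise, nvmfappstart above amounts to approximately:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!                 # 546072 in this run
    waitforlisten "$nvmfpid"   # waits until the target listens on /var/tmp/spdk.sock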
00:08:25.182 [2024-07-26 16:13:44.627192] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.182 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.182 [2024-07-26 16:13:44.761506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:25.442 [2024-07-26 16:13:45.016079] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:25.442 [2024-07-26 16:13:45.016162] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:25.442 [2024-07-26 16:13:45.016215] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:25.442 [2024-07-26 16:13:45.016249] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:25.442 [2024-07-26 16:13:45.016284] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:25.442 [2024-07-26 16:13:45.016430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.442 [2024-07-26 16:13:45.016437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.010 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:26.010 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:08:26.010 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:26.010 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:26.010 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:26.010 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.010 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:26.010 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.010 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:26.010 [2024-07-26 16:13:45.566928] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:26.010 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.010 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:26.010 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.010 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:26.010 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.010 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:26.010 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.010 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:26.010 [2024-07-26 16:13:45.584446] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:26.010 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.010 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:26.010 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.010 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:26.010 NULL1 00:08:26.010 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.010 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:26.010 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.010 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:26.010 Delay0 00:08:26.010 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.010 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:26.010 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.010 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:26.010 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.010 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=546183 00:08:26.010 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:26.010 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:26.010 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.010 [2024-07-26 16:13:45.719019] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
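Putting the delete_subsystem setup together (paths shortened; rpc_cmd in the script is a thin wrapper around rpc.py, and the comments are my reading of the intent), the trace above and the I/O errors that follow correspond to roughly:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_null_create NULL1 1000 512       # null bdev: size 1000, 512-byte blocks
    rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # slow namespace keeps I/O in flight
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &                    # perf_pid=546183
    sleep 2
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1          # deleted while perf I/O is outstanding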
00:08:27.914 16:13:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:27.914 16:13:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.914 16:13:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:28.172 Write completed with error (sct=0, sc=8) 00:08:28.172 Read completed with error (sct=0, sc=8) 00:08:28.172 starting I/O failed: -6 00:08:28.172 Read completed with error (sct=0, sc=8) 00:08:28.172 Read completed with error (sct=0, sc=8) 00:08:28.172 Read completed with error (sct=0, sc=8) 00:08:28.172 Write completed with error (sct=0, sc=8) 00:08:28.172 starting I/O failed: -6 00:08:28.172 Read completed with error (sct=0, sc=8) 00:08:28.172 Read completed with error (sct=0, sc=8) 00:08:28.172 Read completed with error (sct=0, sc=8) 00:08:28.172 Read completed with error (sct=0, sc=8) 00:08:28.172 starting I/O failed: -6 00:08:28.172 Read completed with error (sct=0, sc=8) 00:08:28.172 Write completed with error (sct=0, sc=8) 00:08:28.172 Read completed with error (sct=0, sc=8) 00:08:28.172 Read completed with error (sct=0, sc=8) 00:08:28.172 starting I/O failed: -6 00:08:28.172 Read completed with error (sct=0, sc=8) 00:08:28.172 Read completed with error (sct=0, sc=8) 00:08:28.172 Read completed with error (sct=0, sc=8) 00:08:28.172 Read completed with error (sct=0, sc=8) 00:08:28.172 starting I/O failed: -6 00:08:28.172 Write completed with error (sct=0, sc=8) 00:08:28.172 Read completed with error (sct=0, sc=8) 00:08:28.172 Write completed with error (sct=0, sc=8) 00:08:28.172 Write completed with error (sct=0, sc=8) 00:08:28.172 starting I/O failed: -6 00:08:28.172 Read completed with error (sct=0, sc=8) 00:08:28.172 Write completed with error (sct=0, sc=8) 00:08:28.172 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 starting I/O failed: -6 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 starting I/O failed: -6 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 starting I/O failed: -6 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 [2024-07-26 16:13:47.825636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fe80 is same with the state(5) to be set 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with 
error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 [2024-07-26 16:13:47.827356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(5) to be set 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 starting I/O failed: -6 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 starting I/O failed: -6 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 starting I/O failed: -6 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 starting I/O failed: -6 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 starting I/O failed: -6 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 starting I/O failed: -6 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 starting I/O failed: -6 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 
00:08:28.173 starting I/O failed: -6 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 starting I/O failed: -6 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 starting I/O failed: -6 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 starting I/O failed: -6 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 starting I/O failed: -6 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 starting I/O failed: -6 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 [2024-07-26 16:13:47.828642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016380 is same with the state(5) to be set 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 
00:08:28.173 Write completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 Read completed with error (sct=0, sc=8) 00:08:28.173 [2024-07-26 16:13:47.829265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020600 is same with the state(5) to be set 00:08:29.110 [2024-07-26 16:13:48.779986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015980 is same with the state(5) to be set 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Write completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 [2024-07-26 16:13:48.830892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020380 is same with the state(5) to be set 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Write completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Write completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Write completed with error (sct=0, sc=8) 00:08:29.110 Write completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Write completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Write completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Write completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 [2024-07-26 16:13:48.832448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016600 is same with the state(5) to be set 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Write completed with error (sct=0, sc=8) 00:08:29.110 Write completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error 
(sct=0, sc=8) 00:08:29.110 Write completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Write completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Write completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Write completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Write completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Write completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 [2024-07-26 16:13:48.832852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015e80 is same with the state(5) to be set 00:08:29.110 Write completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Write completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Write completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Write completed with error (sct=0, sc=8) 00:08:29.110 Write completed with error (sct=0, sc=8) 00:08:29.110 Write completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Write completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Write completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Write completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Write completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Write completed with error (sct=0, sc=8) 00:08:29.110 Read completed with error (sct=0, sc=8) 00:08:29.110 Write completed with error (sct=0, sc=8) 00:08:29.110 [2024-07-26 16:13:48.833470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016100 is same with the state(5) to be set 00:08:29.110 Initializing NVMe Controllers 00:08:29.110 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:29.110 Controller IO queue size 128, less than required. 
00:08:29.110 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:29.110 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:29.110 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:29.110 Initialization complete. Launching workers. 00:08:29.110 ======================================================== 00:08:29.110 Latency(us) 00:08:29.110 Device Information : IOPS MiB/s Average min max 00:08:29.110 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 181.70 0.09 958305.68 1506.98 1016667.41 00:08:29.110 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 145.06 0.07 919594.88 1742.29 1015813.41 00:08:29.110 ======================================================== 00:08:29.110 Total : 326.76 0.16 941120.43 1506.98 1016667.41 00:08:29.110 00:08:29.110 [2024-07-26 16:13:48.838185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000015980 (9): Bad file descriptor 00:08:29.110 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:29.110 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.110 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:29.110 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 546183 00:08:29.110 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:29.676 16:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:29.676 16:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 546183 00:08:29.676 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (546183) - No such process 00:08:29.676 16:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 546183 00:08:29.676 16:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:08:29.676 16:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 546183 00:08:29.676 16:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:08:29.676 16:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:29.676 16:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:08:29.676 16:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:29.676 16:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 546183 00:08:29.676 16:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:08:29.676 16:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:29.676 16:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:29.676 16:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:08:29.676 16:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:29.676 16:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.676 16:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:29.676 16:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.676 16:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:29.676 16:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.676 16:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:29.676 [2024-07-26 16:13:49.359829] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:29.676 16:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.676 16:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.676 16:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.676 16:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:29.676 16:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.676 16:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=546707 00:08:29.676 16:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:29.676 16:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:29.676 16:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 546707 00:08:29.676 16:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:29.936 EAL: No free 2048 kB hugepages reported on node 1 00:08:29.936 [2024-07-26 16:13:49.467285] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
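In the first pass the subsystem was deleted while spdk_nvme_perf (pid 546183) still had commands queued, and the bursts of 'Read/Write completed with error (sct=0, sc=8)' entries above are those in-flight commands completing with errors on the host side as the controller goes away. The second pass, just launched as pid 546707, repeats the exercise, and the script again waits for the perf process to exit by polling it with kill -0. A sketch of that wait loop, reconstructed from the traced delete_subsystem.sh commands that follow (the exact control flow inside the script is an assumption):

  perf_pid=546707                      # pid reported in the trace
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 20 )) && exit 1     # give up after ~20 polls, roughly 10 seconds
      sleep 0.5
  done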
00:08:30.195 16:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:30.195 16:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 546707 00:08:30.195 16:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:30.763 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:30.763 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 546707 00:08:30.763 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:31.333 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:31.333 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 546707 00:08:31.333 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:31.902 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:31.902 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 546707 00:08:31.902 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:32.162 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:32.162 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 546707 00:08:32.162 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:32.731 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:32.731 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 546707 00:08:32.731 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:32.991 Initializing NVMe Controllers 00:08:32.991 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:32.991 Controller IO queue size 128, less than required. 00:08:32.991 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:32.991 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:32.991 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:32.991 Initialization complete. Launching workers. 
00:08:32.991 ======================================================== 00:08:32.991 Latency(us) 00:08:32.991 Device Information : IOPS MiB/s Average min max 00:08:32.991 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005421.33 1000317.67 1015243.45 00:08:32.991 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005582.77 1000285.79 1015735.12 00:08:32.991 ======================================================== 00:08:32.991 Total : 256.00 0.12 1005502.05 1000285.79 1015735.12 00:08:32.991 00:08:33.251 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:33.251 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 546707 00:08:33.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (546707) - No such process 00:08:33.251 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 546707 00:08:33.251 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:33.251 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:33.251 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:33.251 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:08:33.251 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:33.251 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:08:33.251 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:33.251 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:33.251 rmmod nvme_tcp 00:08:33.251 rmmod nvme_fabrics 00:08:33.251 rmmod nvme_keyring 00:08:33.251 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:33.251 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:08:33.251 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:08:33.251 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 546072 ']' 00:08:33.251 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 546072 00:08:33.251 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 546072 ']' 00:08:33.251 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 546072 00:08:33.251 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:08:33.251 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:33.251 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 546072 00:08:33.251 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:33.251 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo 
']' 00:08:33.251 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 546072' 00:08:33.251 killing process with pid 546072 00:08:33.251 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 546072 00:08:33.251 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 546072 00:08:34.631 16:13:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:34.631 16:13:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:34.631 16:13:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:34.631 16:13:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:34.631 16:13:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:34.631 16:13:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.632 16:13:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.632 16:13:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:37.173 00:08:37.173 real 0m14.021s 00:08:37.173 user 0m30.451s 00:08:37.173 sys 0m3.259s 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:37.173 ************************************ 00:08:37.173 END TEST nvmf_delete_subsystem 00:08:37.173 ************************************ 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:37.173 ************************************ 00:08:37.173 START TEST nvmf_host_management 00:08:37.173 ************************************ 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:37.173 * Looking for test storage... 
00:08:37.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:37.173 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.174 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:37.174 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:37.174 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:08:37.174 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:08:39.112 
16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:39.112 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 
-- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:39.112 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:39.112 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:39.112 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 
0 )) 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:39.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:39.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:08:39.112 00:08:39.112 --- 10.0.0.2 ping statistics --- 00:08:39.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.112 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:39.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:39.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:08:39.112 00:08:39.112 --- 10.0.0.1 ping statistics --- 00:08:39.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.112 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=549187 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 549187 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 549187 ']' 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:39.112 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.112 [2024-07-26 16:13:58.738099] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:39.112 [2024-07-26 16:13:58.738229] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.112 EAL: No free 2048 kB hugepages reported on node 1 00:08:39.376 [2024-07-26 16:13:58.876095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:39.637 [2024-07-26 16:13:59.141186] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:39.637 [2024-07-26 16:13:59.141246] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:39.637 [2024-07-26 16:13:59.141271] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:39.637 [2024-07-26 16:13:59.141292] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:39.637 [2024-07-26 16:13:59.141318] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:39.637 [2024-07-26 16:13:59.141491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.637 [2024-07-26 16:13:59.141556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:39.637 [2024-07-26 16:13:59.141598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.637 [2024-07-26 16:13:59.141609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:40.204 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:40.204 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:40.204 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:40.204 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:40.204 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.204 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:40.204 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:40.204 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.204 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.204 [2024-07-26 16:13:59.728164] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.204 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.204 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter 
create_subsystem 00:08:40.204 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:40.204 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.204 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:40.204 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:40.204 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:40.204 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.204 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.204 Malloc0 00:08:40.204 [2024-07-26 16:13:59.841506] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:40.204 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.204 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:40.204 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:40.204 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.204 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=549363 00:08:40.204 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 549363 /var/tmp/bdevperf.sock 00:08:40.204 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 549363 ']' 00:08:40.204 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:40.204 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:40.204 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:40.204 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:40.204 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:40.204 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:40.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:08:40.204 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:40.204 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:40.205 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:40.205 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.205 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:40.205 { 00:08:40.205 "params": { 00:08:40.205 "name": "Nvme$subsystem", 00:08:40.205 "trtype": "$TEST_TRANSPORT", 00:08:40.205 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:40.205 "adrfam": "ipv4", 00:08:40.205 "trsvcid": "$NVMF_PORT", 00:08:40.205 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:40.205 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:40.205 "hdgst": ${hdgst:-false}, 00:08:40.205 "ddgst": ${ddgst:-false} 00:08:40.205 }, 00:08:40.205 "method": "bdev_nvme_attach_controller" 00:08:40.205 } 00:08:40.205 EOF 00:08:40.205 )") 00:08:40.205 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:40.205 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:40.205 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:40.205 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:40.205 "params": { 00:08:40.205 "name": "Nvme0", 00:08:40.205 "trtype": "tcp", 00:08:40.205 "traddr": "10.0.0.2", 00:08:40.205 "adrfam": "ipv4", 00:08:40.205 "trsvcid": "4420", 00:08:40.205 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:40.205 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:40.205 "hdgst": false, 00:08:40.205 "ddgst": false 00:08:40.205 }, 00:08:40.205 "method": "bdev_nvme_attach_controller" 00:08:40.205 }' 00:08:40.205 [2024-07-26 16:13:59.948700] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:40.205 [2024-07-26 16:13:59.948845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid549363 ] 00:08:40.465 EAL: No free 2048 kB hugepages reported on node 1 00:08:40.465 [2024-07-26 16:14:00.077390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.725 [2024-07-26 16:14:00.323019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.292 Running I/O for 10 seconds... 
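Note: a minimal standalone sketch of the config-generation pattern traced above (gen_nvmf_target_json expanding a here-doc template per subsystem, pretty-printing with jq, and bdevperf reading the result through process substitution as /dev/fd/63). This is a sketch only, not the SPDK helper itself: the outer "subsystems"/"bdev"/"config" wrapper is recalled from SPDK's nvmf/common.sh rather than visible in this trace, and the address, NQN, and queue parameters are simply the values used in this run.
#!/usr/bin/env bash
# Sketch: build a bdev_nvme_attach_controller config block and feed it to bdevperf.
gen_target_json() {
	local subsystem=${1:-0}
	local config
	config=$(
		cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
	)
	# Assumed wrapper layout: bdevperf expects a full JSON config with a bdev-subsystem section.
	jq . <<EOF
{
  "subsystems": [
    { "subsystem": "bdev", "config": [ $config ] }
  ]
}
EOF
}
# bdevperf sees the generated config as an anonymous fd (e.g. /dev/fd/63), as in the trace above.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_target_json 0) -q 64 -o 65536 -w verify -t 10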
00:08:41.292 16:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:41.292 16:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:41.292 16:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:41.292 16:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.292 16:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:41.292 16:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.292 16:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:41.292 16:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:41.292 16:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:41.292 16:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:41.292 16:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:41.292 16:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:41.292 16:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:41.292 16:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:41.292 16:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:41.292 16:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:41.292 16:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.292 16:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:41.292 16:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.292 16:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=3 00:08:41.292 16:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 3 -ge 100 ']' 00:08:41.292 16:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:41.553 16:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:41.553 16:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:41.553 16:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:41.553 16:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:41.553 16:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.553 16:14:01 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:41.553 16:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.553 16:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=387 00:08:41.553 16:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 387 -ge 100 ']' 00:08:41.553 16:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:41.553 16:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:41.553 16:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:41.553 16:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:41.553 16:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.553 16:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:41.553 [2024-07-26 16:14:01.282550] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.282665] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.282701] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.282732] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.282758] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.282777] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.282812] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.282849] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.282884] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.282907] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.282925] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.282943] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.282979] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.282997] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.283023] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.283045] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.283084] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.283109] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.283148] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.283167] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.283184] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.283201] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.283230] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.283248] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.283265] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.283292] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.283312] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.283329] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.283346] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.283373] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.283391] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.283407] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.283434] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.283460] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.283479] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.283460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:41.553 [2024-07-26 16:14:01.283496] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.283515] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.283527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.553 [2024-07-26 16:14:01.283532] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.283551] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.283558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:41.553 [2024-07-26 16:14:01.283568] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.283579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.553 [2024-07-26 16:14:01.283585] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.283601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:41.553 [2024-07-26 16:14:01.283603] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.283622] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.283623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.553 [2024-07-26 16:14:01.283641] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.283645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:41.553 [2024-07-26 16:14:01.283659] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.553 [2024-07-26 16:14:01.283666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.554 [2024-07-26 16:14:01.283676] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.554 [2024-07-26 16:14:01.283686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:08:41.554 [2024-07-26 16:14:01.283694]
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.554 [2024-07-26 16:14:01.283711] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.554 [2024-07-26 16:14:01.283728] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.554 [2024-07-26 16:14:01.283745] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.554 [2024-07-26 16:14:01.283766] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.554 [2024-07-26 16:14:01.283784] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.554 [2024-07-26 16:14:01.283801] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.554 [2024-07-26 16:14:01.283818] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.554 [2024-07-26 16:14:01.283835] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.554 [2024-07-26 16:14:01.283853] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.554 [2024-07-26 16:14:01.283870] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.554 [2024-07-26 16:14:01.283887] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.554 [2024-07-26 16:14:01.283904] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.554 [2024-07-26 16:14:01.283921] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.554 [2024-07-26 16:14:01.283939] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.554 [2024-07-26 16:14:01.283956] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.554 [2024-07-26 16:14:01.283973] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:41.554 [2024-07-26 16:14:01.284134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.554 [2024-07-26 16:14:01.284165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.554 [2024-07-26 16:14:01.284207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:49280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.554 [2024-07-26 16:14:01.284230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.554 
[2024-07-26 16:14:01.284256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:49408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.554 [2024-07-26 16:14:01.284277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.554 [2024-07-26 16:14:01.284301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:49536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.554 [2024-07-26 16:14:01.284338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.554 [2024-07-26 16:14:01.284374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:49664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.554 [2024-07-26 16:14:01.284395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.554 [2024-07-26 16:14:01.284427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:49792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.554 [2024-07-26 16:14:01.284448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.554 [2024-07-26 16:14:01.284471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:49920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.554 [2024-07-26 16:14:01.284498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.554 [2024-07-26 16:14:01.284523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:50048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.554 [2024-07-26 16:14:01.284544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.554 [2024-07-26 16:14:01.284568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:50176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.554 [2024-07-26 16:14:01.284589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.554 [2024-07-26 16:14:01.284614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:50304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.554 [2024-07-26 16:14:01.284635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.554 [2024-07-26 16:14:01.284659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:50432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.554 [2024-07-26 16:14:01.284680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.554 [2024-07-26 16:14:01.284704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:50560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.554 [2024-07-26 16:14:01.284725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.554 [2024-07-26 16:14:01.284749] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:50688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.554 [2024-07-26 16:14:01.284770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.554 [2024-07-26 16:14:01.284794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:50816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.554 [2024-07-26 16:14:01.284815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.554 [2024-07-26 16:14:01.284838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:50944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.554 [2024-07-26 16:14:01.284859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.554 [2024-07-26 16:14:01.284882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:51072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.554 [2024-07-26 16:14:01.284903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.554 [2024-07-26 16:14:01.284928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:51200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.554 [2024-07-26 16:14:01.284950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.554 [2024-07-26 16:14:01.284973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:51328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.554 [2024-07-26 16:14:01.284994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.554 [2024-07-26 16:14:01.285017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:51456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.554 [2024-07-26 16:14:01.285038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.554 [2024-07-26 16:14:01.285072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:51584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.554 [2024-07-26 16:14:01.285096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.554 [2024-07-26 16:14:01.285131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:51712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.554 [2024-07-26 16:14:01.285152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.554 [2024-07-26 16:14:01.285176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:51840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.554 [2024-07-26 16:14:01.285196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.554 [2024-07-26 16:14:01.285219] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:51968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.554 [2024-07-26 16:14:01.285240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.554 [2024-07-26 16:14:01.285263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:52096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.554 [2024-07-26 16:14:01.285283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.554 [2024-07-26 16:14:01.285314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:52224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.554 [2024-07-26 16:14:01.285335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.554 [2024-07-26 16:14:01.285358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:52352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.554 [2024-07-26 16:14:01.285388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.554 [2024-07-26 16:14:01.285412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:52480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.554 [2024-07-26 16:14:01.285443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.554 [2024-07-26 16:14:01.285466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:52608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.554 [2024-07-26 16:14:01.285487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.554 [2024-07-26 16:14:01.285510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.554 [2024-07-26 16:14:01.285531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.554 [2024-07-26 16:14:01.285555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:52864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.554 [2024-07-26 16:14:01.285575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.555 [2024-07-26 16:14:01.285598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:52992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.555 [2024-07-26 16:14:01.285620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.555 [2024-07-26 16:14:01.285644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:53120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.555 [2024-07-26 16:14:01.285669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.555 [2024-07-26 16:14:01.285694] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:53248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.555 [2024-07-26 16:14:01.285715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.555 [2024-07-26 16:14:01.285739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:53376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.555 [2024-07-26 16:14:01.285760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.555 [2024-07-26 16:14:01.285783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:53504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.555 [2024-07-26 16:14:01.285804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.555 [2024-07-26 16:14:01.285827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:53632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.555 [2024-07-26 16:14:01.285848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.555 [2024-07-26 16:14:01.285871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:53760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.555 [2024-07-26 16:14:01.285892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.555 [2024-07-26 16:14:01.285915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:53888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.555 [2024-07-26 16:14:01.285935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.555 [2024-07-26 16:14:01.285959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:54016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.555 [2024-07-26 16:14:01.285979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.555 [2024-07-26 16:14:01.286003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:54144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.555 [2024-07-26 16:14:01.286023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.555 [2024-07-26 16:14:01.286046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:54272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.555 [2024-07-26 16:14:01.286077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.555 [2024-07-26 16:14:01.286102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:54400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.555 [2024-07-26 16:14:01.286126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.555 [2024-07-26 16:14:01.286149] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:54528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.555 [2024-07-26 16:14:01.286170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.555 [2024-07-26 16:14:01.286193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:54656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.555 [2024-07-26 16:14:01.286214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.555 [2024-07-26 16:14:01.286242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:54784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.555 [2024-07-26 16:14:01.286264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.555 [2024-07-26 16:14:01.286288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:54912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.555 [2024-07-26 16:14:01.286309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.555 [2024-07-26 16:14:01.286332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:55040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.555 [2024-07-26 16:14:01.286352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.555 [2024-07-26 16:14:01.286381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:55168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.555 [2024-07-26 16:14:01.286403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.555 [2024-07-26 16:14:01.286434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:55296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.555 [2024-07-26 16:14:01.286456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.555 [2024-07-26 16:14:01.286479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:55424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.555 [2024-07-26 16:14:01.286500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.555 [2024-07-26 16:14:01.286524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:55552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.555 [2024-07-26 16:14:01.286545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.555 [2024-07-26 16:14:01.286569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:55680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.555 [2024-07-26 16:14:01.286590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.555 [2024-07-26 16:14:01.286613] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:55808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.555 [2024-07-26 16:14:01.286635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.555 [2024-07-26 16:14:01.286658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:55936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.555 [2024-07-26 16:14:01.286679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.555 [2024-07-26 16:14:01.286703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:56064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.555 [2024-07-26 16:14:01.286724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.555 [2024-07-26 16:14:01.286748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:56192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.555 [2024-07-26 16:14:01.286769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.555 [2024-07-26 16:14:01.286792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:56320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.555 [2024-07-26 16:14:01.286817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.555 [2024-07-26 16:14:01.286841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:56448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.555 [2024-07-26 16:14:01.286863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.555 16:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.555 [2024-07-26 16:14:01.286886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:56576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.555 [2024-07-26 16:14:01.286908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.555 [2024-07-26 16:14:01.286931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:56704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.555 [2024-07-26 16:14:01.286952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.555 [2024-07-26 16:14:01.286976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:56832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.555 [2024-07-26 16:14:01.286997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.555 [2024-07-26 16:14:01.287021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:56960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.555 16:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:41.555 [2024-07-26 16:14:01.287042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.555 [2024-07-26 16:14:01.287073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:57088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.555 [2024-07-26 16:14:01.287096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.555 [2024-07-26 16:14:01.287128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:57216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:41.555 [2024-07-26 16:14:01.287148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.555 [2024-07-26 16:14:01.287170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2c80 is same with the state(5) to be set 00:08:41.555 16:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.555 16:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:41.555 [2024-07-26 16:14:01.287480] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f2c80 was disconnected and freed. reset controller. 00:08:41.555 [2024-07-26 16:14:01.288795] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:41.555 task offset: 49152 on job bdev=Nvme0n1 fails 00:08:41.555 00:08:41.555 Latency(us) 00:08:41.555 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:41.555 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:41.555 Job: Nvme0n1 ended in about 0.37 seconds with error 00:08:41.555 Verification LBA range: start 0x0 length 0x400 00:08:41.555 Nvme0n1 : 0.37 1037.73 64.86 172.95 0.00 51195.50 11602.30 44661.57 00:08:41.556 =================================================================================================================== 00:08:41.556 Total : 1037.73 64.86 172.95 0.00 51195.50 11602.30 44661.57 00:08:41.556 [2024-07-26 16:14:01.294305] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:41.556 [2024-07-26 16:14:01.294355] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:08:41.556 16:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.556 16:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:41.556 [2024-07-26 16:14:01.311125] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
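Note: the failed first run above is driven by the host-management RPCs traced at host_management.sh@84 and @85. A minimal sketch of that sequence, using scripts/rpc.py directly instead of the test's rpc_cmd wrapper (default RPC socket; NQNs are the ones from this run), assuming the target is already listening on 10.0.0.2:4420:
#!/usr/bin/env bash
RPC=./scripts/rpc.py
# Revoke the host's access while bdevperf still has I/O in flight; the target tears
# down its queue pairs, which appears above as the ABORTED - SQ DELETION completions.
$RPC nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# Re-admit the host; the initiator's automatic controller reset/reconnect can then
# complete ("Resetting controller successful" above).
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0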
00:08:42.932 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 549363 00:08:42.932 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:42.932 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:42.932 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:42.932 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:42.932 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:42.932 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:42.932 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:42.932 { 00:08:42.932 "params": { 00:08:42.932 "name": "Nvme$subsystem", 00:08:42.932 "trtype": "$TEST_TRANSPORT", 00:08:42.932 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:42.932 "adrfam": "ipv4", 00:08:42.932 "trsvcid": "$NVMF_PORT", 00:08:42.932 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:42.932 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:42.932 "hdgst": ${hdgst:-false}, 00:08:42.932 "ddgst": ${ddgst:-false} 00:08:42.932 }, 00:08:42.932 "method": "bdev_nvme_attach_controller" 00:08:42.932 } 00:08:42.932 EOF 00:08:42.932 )") 00:08:42.932 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:42.932 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:42.932 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:42.932 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:42.932 "params": { 00:08:42.932 "name": "Nvme0", 00:08:42.932 "trtype": "tcp", 00:08:42.932 "traddr": "10.0.0.2", 00:08:42.932 "adrfam": "ipv4", 00:08:42.932 "trsvcid": "4420", 00:08:42.932 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:42.932 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:42.932 "hdgst": false, 00:08:42.932 "ddgst": false 00:08:42.932 }, 00:08:42.932 "method": "bdev_nvme_attach_controller" 00:08:42.932 }' 00:08:42.932 [2024-07-26 16:14:02.377054] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:42.932 [2024-07-26 16:14:02.377205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid549648 ] 00:08:42.932 EAL: No free 2048 kB hugepages reported on node 1 00:08:42.932 [2024-07-26 16:14:02.502679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.191 [2024-07-26 16:14:02.748292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.450 Running I/O for 1 seconds... 
00:08:44.825 00:08:44.825 Latency(us) 00:08:44.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.825 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:44.825 Verification LBA range: start 0x0 length 0x400 00:08:44.825 Nvme0n1 : 1.03 1299.00 81.19 0.00 0.00 48446.45 10340.12 41166.32 00:08:44.825 =================================================================================================================== 00:08:44.825 Total : 1299.00 81.19 0.00 0.00 48446.45 10340.12 41166.32 00:08:45.764 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:45.764 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:45.764 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:45.764 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:45.764 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:45.764 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:45.764 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:08:45.764 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:45.765 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:08:45.765 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:45.765 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:45.765 rmmod nvme_tcp 00:08:45.765 rmmod nvme_fabrics 00:08:45.765 rmmod nvme_keyring 00:08:45.765 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:45.765 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:08:45.765 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:08:45.765 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 549187 ']' 00:08:45.765 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 549187 00:08:45.765 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 549187 ']' 00:08:45.765 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 549187 00:08:45.765 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:08:45.765 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:45.765 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 549187 00:08:45.765 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:45.765 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:45.765 16:14:05 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 549187' 00:08:45.765 killing process with pid 549187 00:08:45.765 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 549187 00:08:45.765 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 549187 00:08:47.143 [2024-07-26 16:14:06.613292] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:47.143 16:14:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:47.143 16:14:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:47.143 16:14:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:47.143 16:14:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:47.143 16:14:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:47.143 16:14:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.143 16:14:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:47.143 16:14:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.051 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:49.051 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:49.051 00:08:49.051 real 0m12.364s 00:08:49.051 user 0m34.210s 00:08:49.051 sys 0m3.043s 00:08:49.051 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:49.051 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:49.051 ************************************ 00:08:49.051 END TEST nvmf_host_management 00:08:49.051 ************************************ 00:08:49.051 16:14:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:49.051 16:14:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:49.051 16:14:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:49.051 16:14:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:49.051 ************************************ 00:08:49.051 START TEST nvmf_lvol 00:08:49.051 ************************************ 00:08:49.051 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:49.309 * Looking for test storage... 
00:08:49.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:49.309 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:49.309 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:08:49.310 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:51.216 16:14:10 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:51.216 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:51.216 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:51.216 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:51.216 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:51.216 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:51.217 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:51.217 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:51.217 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:51.217 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:51.217 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:51.217 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:51.217 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:51.217 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:08:51.217 00:08:51.217 --- 10.0.0.2 ping statistics --- 00:08:51.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.217 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:08:51.217 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:51.217 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:51.217 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:08:51.217 00:08:51.217 --- 10.0.0.1 ping statistics --- 00:08:51.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.217 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:08:51.217 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:51.217 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:08:51.217 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:51.217 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:51.217 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:51.217 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:51.217 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:51.217 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:51.217 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:51.217 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:51.217 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:51.217 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:51.217 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:51.217 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=552111 00:08:51.217 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:51.217 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 552111 00:08:51.217 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 552111 ']' 00:08:51.475 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.475 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:51.475 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.475 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:51.475 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:51.475 [2024-07-26 16:14:11.061655] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:51.475 [2024-07-26 16:14:11.061800] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.475 EAL: No free 2048 kB hugepages reported on node 1 00:08:51.475 [2024-07-26 16:14:11.207607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:51.735 [2024-07-26 16:14:11.467893] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:51.735 [2024-07-26 16:14:11.467975] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:51.735 [2024-07-26 16:14:11.468009] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:51.735 [2024-07-26 16:14:11.468031] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:51.735 [2024-07-26 16:14:11.468052] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:51.735 [2024-07-26 16:14:11.468191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.735 [2024-07-26 16:14:11.468259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.735 [2024-07-26 16:14:11.468263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:52.309 16:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:52.309 16:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:52.309 16:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:52.309 16:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:52.309 16:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:52.309 16:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:52.309 16:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:52.576 [2024-07-26 16:14:12.300008] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:52.836 16:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:53.095 16:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:53.095 16:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:53.353 16:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:53.353 16:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:53.611 16:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:53.869 16:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=4718974f-d717-486f-99ea-65e2df1d8aac 
00:08:53.869 16:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4718974f-d717-486f-99ea-65e2df1d8aac lvol 20 00:08:54.128 16:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=213d1fab-a1c3-47cb-b2da-5ff6492db764 00:08:54.128 16:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:54.386 16:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 213d1fab-a1c3-47cb-b2da-5ff6492db764 00:08:54.644 16:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:54.903 [2024-07-26 16:14:14.572658] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:54.903 16:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:55.193 16:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=552558 00:08:55.193 16:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:55.193 16:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:55.193 EAL: No free 2048 kB hugepages reported on node 1 00:08:56.130 16:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 213d1fab-a1c3-47cb-b2da-5ff6492db764 MY_SNAPSHOT 00:08:56.697 16:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=a3656a03-6885-4717-98f6-9cded68ccf3a 00:08:56.697 16:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 213d1fab-a1c3-47cb-b2da-5ff6492db764 30 00:08:56.956 16:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone a3656a03-6885-4717-98f6-9cded68ccf3a MY_CLONE 00:08:57.214 16:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=4fff4a3c-261f-4779-8090-a415820557a3 00:08:57.214 16:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 4fff4a3c-261f-4779-8090-a415820557a3 00:08:57.781 16:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 552558 00:09:05.906 Initializing NVMe Controllers 00:09:05.906 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:05.906 Controller IO queue size 128, less than required. 00:09:05.906 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:09:05.906 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:05.906 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:05.906 Initialization complete. Launching workers. 00:09:05.906 ======================================================== 00:09:05.906 Latency(us) 00:09:05.906 Device Information : IOPS MiB/s Average min max 00:09:05.906 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 7903.30 30.87 16202.25 498.95 187557.24 00:09:05.906 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8196.20 32.02 15630.87 3358.86 146695.55 00:09:05.906 ======================================================== 00:09:05.906 Total : 16099.50 62.89 15911.36 498.95 187557.24 00:09:05.906 00:09:05.906 16:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:05.906 16:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 213d1fab-a1c3-47cb-b2da-5ff6492db764 00:09:06.472 16:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4718974f-d717-486f-99ea-65e2df1d8aac 00:09:06.472 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:06.472 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:06.472 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:06.472 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:06.472 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:09:06.472 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:06.472 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:09:06.472 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:06.472 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:06.472 rmmod nvme_tcp 00:09:06.732 rmmod nvme_fabrics 00:09:06.732 rmmod nvme_keyring 00:09:06.732 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:06.732 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:09:06.732 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:09:06.732 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 552111 ']' 00:09:06.732 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 552111 00:09:06.732 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 552111 ']' 00:09:06.732 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 552111 00:09:06.732 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:09:06.732 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:06.732 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 552111 00:09:06.732 16:14:26 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:06.732 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:06.732 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 552111' 00:09:06.732 killing process with pid 552111 00:09:06.732 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 552111 00:09:06.732 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 552111 00:09:08.109 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:08.109 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:08.109 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:08.109 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:08.109 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:08.109 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.109 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:08.109 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.644 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:10.644 00:09:10.644 real 0m21.058s 00:09:10.644 user 1m9.561s 00:09:10.644 sys 0m5.819s 00:09:10.644 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:10.644 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:10.644 ************************************ 00:09:10.644 END TEST nvmf_lvol 00:09:10.644 ************************************ 00:09:10.644 16:14:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:10.644 16:14:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:10.644 16:14:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:10.644 16:14:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:10.644 ************************************ 00:09:10.644 START TEST nvmf_lvs_grow 00:09:10.644 ************************************ 00:09:10.644 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:10.644 * Looking for test storage... 
00:09:10.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:10.644 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:10.644 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.645 16:14:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:10.645 16:14:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:09:10.645 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:12.546 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:12.546 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:09:12.546 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:12.546 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:12.546 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:12.546 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:12.546 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:12.546 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:09:12.546 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:12.546 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:09:12.546 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:09:12.546 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:09:12.546 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:12.547 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:12.547 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:12.547 
16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:12.547 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:12.547 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:12.547 16:14:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:12.547 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:12.547 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:12.547 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:12.547 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:12.547 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:12.547 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:12.547 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:12.547 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:12.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:12.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:09:12.547 00:09:12.547 --- 10.0.0.2 ping statistics --- 00:09:12.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.547 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:09:12.547 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:12.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:12.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:09:12.547 00:09:12.547 --- 10.0.0.1 ping statistics --- 00:09:12.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.547 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:09:12.547 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:12.547 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:09:12.547 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:12.547 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:12.547 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:12.547 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:12.547 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:12.547 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:12.547 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:12.547 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:12.547 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:12.547 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:12.547 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:12.547 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=556077 00:09:12.547 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:12.548 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 556077 00:09:12.548 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 556077 ']' 00:09:12.548 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.548 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:12.548 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.548 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:12.548 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:12.548 [2024-07-26 16:14:32.214861] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
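[annotation] Everything from nvmf_tcp_init up to the two pings above builds a loopback-style test bed on one host: the target port cvl_0_0 is moved into a private network namespace and addressed as 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1/24, and an iptables rule opens TCP port 4420. A minimal sketch of the same plumbing, with the interface and namespace names taken from the trace:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> root namespace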
00:09:12.548 [2024-07-26 16:14:32.215010] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.548 EAL: No free 2048 kB hugepages reported on node 1 00:09:12.807 [2024-07-26 16:14:32.352168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.065 [2024-07-26 16:14:32.609169] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:13.065 [2024-07-26 16:14:32.609237] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:13.065 [2024-07-26 16:14:32.609263] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:13.065 [2024-07-26 16:14:32.609284] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:13.065 [2024-07-26 16:14:32.609303] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:13.065 [2024-07-26 16:14:32.609371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.631 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:13.631 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:09:13.631 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:13.631 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:13.631 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:13.631 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:13.631 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:13.888 [2024-07-26 16:14:33.507613] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:13.888 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:13.888 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:13.888 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:13.889 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:13.889 ************************************ 00:09:13.889 START TEST lvs_grow_clean 00:09:13.889 ************************************ 00:09:13.889 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:09:13.889 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:13.889 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:13.889 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:13.889 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local 
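[annotation] nvmfappstart then launches nvmf_tgt inside that namespace and, once the RPC socket is listening, creates the TCP transport before lvs_grow_clean begins. A sketch of those two steps as they appear in the trace (paths are this job's workspace paths; any SPDK build tree works the same way):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # after /var/tmp/spdk.sock is up:
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192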
aio_init_size_mb=200 00:09:13.889 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:13.889 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:13.889 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:13.889 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:13.889 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:14.146 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:14.146 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:14.404 16:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=4e930bec-0633-46ed-9d78-505e493f296a 00:09:14.404 16:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4e930bec-0633-46ed-9d78-505e493f296a 00:09:14.404 16:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:14.664 16:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:14.664 16:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:14.664 16:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4e930bec-0633-46ed-9d78-505e493f296a lvol 150 00:09:14.924 16:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=b6a52b2b-dd23-4076-8825-df7c266bdad5 00:09:14.924 16:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:14.924 16:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:15.184 [2024-07-26 16:14:34.859760] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:15.184 [2024-07-26 16:14:34.859894] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:15.184 true 00:09:15.184 16:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
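[annotation] lvs_grow starts from a 200 MiB file-backed AIO bdev, builds a logical volume store with 4 MiB clusters on it, carves out a 150 MiB lvol, then doubles the backing file to 400 MiB and rescans the AIO bdev so the extra capacity becomes visible; the grow of the lvstore itself happens later, during I/O. A condensed sketch of that preparation (file path as in the trace; the UUIDs are per-run values returned by the RPCs):

    aio_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    rm -f "$aio_file"; truncate -s 200M "$aio_file"
    $rpc bdev_aio_create "$aio_file" aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 at this point
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)
    truncate -s 400M "$aio_file"        # enlarge the backing file...
    $rpc bdev_aio_rescan aio_bdev       # ...and let the AIO bdev pick up the new block count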
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4e930bec-0633-46ed-9d78-505e493f296a 00:09:15.184 16:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:15.442 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:15.442 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:15.701 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b6a52b2b-dd23-4076-8825-df7c266bdad5 00:09:15.959 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:16.219 [2024-07-26 16:14:35.955489] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:16.219 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:16.813 16:14:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=556526 00:09:16.813 16:14:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:16.813 16:14:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:16.813 16:14:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 556526 /var/tmp/bdevperf.sock 00:09:16.813 16:14:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 556526 ']' 00:09:16.813 16:14:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:16.813 16:14:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:16.813 16:14:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:16.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:16.813 16:14:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:16.813 16:14:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:16.813 [2024-07-26 16:14:36.335401] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
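[annotation] The lvol is then exported over NVMe/TCP, and a separate bdevperf process is started with -z so it idles until driven over its own RPC socket. A sketch of the export plus the attach that follows, with the subsystem name, addresses and queue parameters taken from the trace; $lvol stands for the lvol UUID created above:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    # once /var/tmp/bdevperf.sock is listening:
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0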
00:09:16.813 [2024-07-26 16:14:36.335541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid556526 ] 00:09:16.813 EAL: No free 2048 kB hugepages reported on node 1 00:09:16.813 [2024-07-26 16:14:36.464523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.077 [2024-07-26 16:14:36.708847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.643 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:17.643 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:09:17.643 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:18.211 Nvme0n1 00:09:18.211 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:18.472 [ 00:09:18.472 { 00:09:18.472 "name": "Nvme0n1", 00:09:18.472 "aliases": [ 00:09:18.472 "b6a52b2b-dd23-4076-8825-df7c266bdad5" 00:09:18.472 ], 00:09:18.472 "product_name": "NVMe disk", 00:09:18.472 "block_size": 4096, 00:09:18.472 "num_blocks": 38912, 00:09:18.472 "uuid": "b6a52b2b-dd23-4076-8825-df7c266bdad5", 00:09:18.472 "assigned_rate_limits": { 00:09:18.472 "rw_ios_per_sec": 0, 00:09:18.472 "rw_mbytes_per_sec": 0, 00:09:18.472 "r_mbytes_per_sec": 0, 00:09:18.472 "w_mbytes_per_sec": 0 00:09:18.472 }, 00:09:18.472 "claimed": false, 00:09:18.472 "zoned": false, 00:09:18.472 "supported_io_types": { 00:09:18.472 "read": true, 00:09:18.472 "write": true, 00:09:18.472 "unmap": true, 00:09:18.472 "flush": true, 00:09:18.472 "reset": true, 00:09:18.472 "nvme_admin": true, 00:09:18.472 "nvme_io": true, 00:09:18.472 "nvme_io_md": false, 00:09:18.472 "write_zeroes": true, 00:09:18.472 "zcopy": false, 00:09:18.472 "get_zone_info": false, 00:09:18.472 "zone_management": false, 00:09:18.472 "zone_append": false, 00:09:18.472 "compare": true, 00:09:18.472 "compare_and_write": true, 00:09:18.472 "abort": true, 00:09:18.472 "seek_hole": false, 00:09:18.472 "seek_data": false, 00:09:18.472 "copy": true, 00:09:18.472 "nvme_iov_md": false 00:09:18.472 }, 00:09:18.472 "memory_domains": [ 00:09:18.472 { 00:09:18.472 "dma_device_id": "system", 00:09:18.472 "dma_device_type": 1 00:09:18.472 } 00:09:18.472 ], 00:09:18.472 "driver_specific": { 00:09:18.472 "nvme": [ 00:09:18.472 { 00:09:18.472 "trid": { 00:09:18.472 "trtype": "TCP", 00:09:18.472 "adrfam": "IPv4", 00:09:18.472 "traddr": "10.0.0.2", 00:09:18.472 "trsvcid": "4420", 00:09:18.472 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:18.472 }, 00:09:18.472 "ctrlr_data": { 00:09:18.472 "cntlid": 1, 00:09:18.472 "vendor_id": "0x8086", 00:09:18.472 "model_number": "SPDK bdev Controller", 00:09:18.472 "serial_number": "SPDK0", 00:09:18.472 "firmware_revision": "24.09", 00:09:18.472 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:18.472 "oacs": { 00:09:18.472 "security": 0, 00:09:18.472 "format": 0, 00:09:18.472 "firmware": 0, 00:09:18.472 "ns_manage": 0 00:09:18.472 }, 00:09:18.472 
"multi_ctrlr": true, 00:09:18.472 "ana_reporting": false 00:09:18.472 }, 00:09:18.472 "vs": { 00:09:18.472 "nvme_version": "1.3" 00:09:18.472 }, 00:09:18.472 "ns_data": { 00:09:18.472 "id": 1, 00:09:18.472 "can_share": true 00:09:18.472 } 00:09:18.472 } 00:09:18.472 ], 00:09:18.472 "mp_policy": "active_passive" 00:09:18.472 } 00:09:18.472 } 00:09:18.472 ] 00:09:18.472 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=556797 00:09:18.472 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:18.472 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:18.472 Running I/O for 10 seconds... 00:09:19.411 Latency(us) 00:09:19.411 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:19.411 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.411 Nvme0n1 : 1.00 11703.00 45.71 0.00 0.00 0.00 0.00 0.00 00:09:19.411 =================================================================================================================== 00:09:19.411 Total : 11703.00 45.71 0.00 0.00 0.00 0.00 0.00 00:09:19.411 00:09:20.351 16:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4e930bec-0633-46ed-9d78-505e493f296a 00:09:20.610 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.610 Nvme0n1 : 2.00 11386.00 44.48 0.00 0.00 0.00 0.00 0.00 00:09:20.610 =================================================================================================================== 00:09:20.610 Total : 11386.00 44.48 0.00 0.00 0.00 0.00 0.00 00:09:20.610 00:09:20.610 true 00:09:20.610 16:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4e930bec-0633-46ed-9d78-505e493f296a 00:09:20.610 16:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:20.869 16:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:20.869 16:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:20.869 16:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 556797 00:09:21.438 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.439 Nvme0n1 : 3.00 11335.00 44.28 0.00 0.00 0.00 0.00 0.00 00:09:21.439 =================================================================================================================== 00:09:21.439 Total : 11335.00 44.28 0.00 0.00 0.00 0.00 0.00 00:09:21.439 00:09:22.378 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.378 Nvme0n1 : 4.00 11312.25 44.19 0.00 0.00 0.00 0.00 0.00 00:09:22.378 =================================================================================================================== 00:09:22.378 Total : 11312.25 44.19 0.00 0.00 0.00 0.00 0.00 00:09:22.378 00:09:23.760 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:09:23.760 Nvme0n1 : 5.00 11325.20 44.24 0.00 0.00 0.00 0.00 0.00 00:09:23.760 =================================================================================================================== 00:09:23.760 Total : 11325.20 44.24 0.00 0.00 0.00 0.00 0.00 00:09:23.760 00:09:24.697 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.697 Nvme0n1 : 6.00 11311.67 44.19 0.00 0.00 0.00 0.00 0.00 00:09:24.697 =================================================================================================================== 00:09:24.697 Total : 11311.67 44.19 0.00 0.00 0.00 0.00 0.00 00:09:24.697 00:09:25.638 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.638 Nvme0n1 : 7.00 11322.14 44.23 0.00 0.00 0.00 0.00 0.00 00:09:25.638 =================================================================================================================== 00:09:25.638 Total : 11322.14 44.23 0.00 0.00 0.00 0.00 0.00 00:09:25.638 00:09:26.576 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.576 Nvme0n1 : 8.00 11344.50 44.31 0.00 0.00 0.00 0.00 0.00 00:09:26.576 =================================================================================================================== 00:09:26.576 Total : 11344.50 44.31 0.00 0.00 0.00 0.00 0.00 00:09:26.576 00:09:27.515 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.515 Nvme0n1 : 9.00 11354.56 44.35 0.00 0.00 0.00 0.00 0.00 00:09:27.515 =================================================================================================================== 00:09:27.515 Total : 11354.56 44.35 0.00 0.00 0.00 0.00 0.00 00:09:27.515 00:09:28.455 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.455 Nvme0n1 : 10.00 11356.00 44.36 0.00 0.00 0.00 0.00 0.00 00:09:28.455 =================================================================================================================== 00:09:28.455 Total : 11356.00 44.36 0.00 0.00 0.00 0.00 0.00 00:09:28.455 00:09:28.455 00:09:28.455 Latency(us) 00:09:28.455 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:28.455 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.455 Nvme0n1 : 10.01 11359.09 44.37 0.00 0.00 11261.83 5801.15 21165.70 00:09:28.455 =================================================================================================================== 00:09:28.455 Total : 11359.09 44.37 0.00 0.00 11261.83 5801.15 21165.70 00:09:28.455 0 00:09:28.455 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 556526 00:09:28.455 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 556526 ']' 00:09:28.455 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 556526 00:09:28.455 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:09:28.455 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:28.455 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 556526 00:09:28.455 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:28.455 
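[annotation] The ten one-second samples above come from bdevperf's perform_tests RPC; two seconds into the run the test grows the lvstore over the enlarged backing bdev and immediately checks that the data-cluster count doubled, while the random-write workload keeps running against the exported lvol. A sketch of that mid-run grow, using the lvstore UUID reported for this run:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    lvs=4e930bec-0633-46ed-9d78-505e493f296a    # lvstore UUID from this run
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests &      # 10 s of randwrite I/O
    sleep 2
    $rpc bdev_lvol_grow_lvstore -u "$lvs"              # take up the grown AIO bdev
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expect 99 now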
16:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:28.455 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 556526' 00:09:28.455 killing process with pid 556526 00:09:28.455 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 556526 00:09:28.455 Received shutdown signal, test time was about 10.000000 seconds 00:09:28.455 00:09:28.455 Latency(us) 00:09:28.455 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:28.455 =================================================================================================================== 00:09:28.455 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:28.455 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 556526 00:09:29.834 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:29.834 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:30.092 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4e930bec-0633-46ed-9d78-505e493f296a 00:09:30.092 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:30.351 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:30.351 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:30.351 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:30.609 [2024-07-26 16:14:50.246113] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:30.609 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4e930bec-0633-46ed-9d78-505e493f296a 00:09:30.609 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:09:30.609 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4e930bec-0633-46ed-9d78-505e493f296a 00:09:30.609 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:30.609 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:30.609 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:30.609 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:30.609 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:30.610 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:30.610 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:30.610 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:30.610 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4e930bec-0633-46ed-9d78-505e493f296a 00:09:30.867 request: 00:09:30.867 { 00:09:30.867 "uuid": "4e930bec-0633-46ed-9d78-505e493f296a", 00:09:30.867 "method": "bdev_lvol_get_lvstores", 00:09:30.867 "req_id": 1 00:09:30.867 } 00:09:30.867 Got JSON-RPC error response 00:09:30.867 response: 00:09:30.867 { 00:09:30.867 "code": -19, 00:09:30.867 "message": "No such device" 00:09:30.867 } 00:09:30.867 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:09:30.867 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:30.867 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:30.867 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:30.867 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:31.125 aio_bdev 00:09:31.125 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b6a52b2b-dd23-4076-8825-df7c266bdad5 00:09:31.125 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=b6a52b2b-dd23-4076-8825-df7c266bdad5 00:09:31.125 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:31.125 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:09:31.125 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:31.125 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:31.125 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:31.384 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
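[annotation] Cleanup of the clean pass doubles as a hot-remove check: deleting the AIO bdev out from under the store makes vbdev_lvol close the lvstore, so a follow-up bdev_lvol_get_lvstores is expected to fail with -19 "No such device" (the NOT wrapper in the trace asserts exactly that), and re-creating the AIO bdev brings the lvstore and its lvol back through examine. A sketch of that sequence with this run's UUIDs:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    aio_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
    $rpc bdev_aio_delete aio_bdev                       # lvstore "lvs" is closed along with it
    if $rpc bdev_lvol_get_lvstores -u 4e930bec-0633-46ed-9d78-505e493f296a; then
        echo "lvstore should no longer be known" >&2; exit 1
    fi
    $rpc bdev_aio_create "$aio_file" aio_bdev 4096      # re-attach the backing bdev
    $rpc bdev_wait_for_examine                          # lvol b6a52b2b-... is registered again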
bdev_get_bdevs -b b6a52b2b-dd23-4076-8825-df7c266bdad5 -t 2000 00:09:31.642 [ 00:09:31.642 { 00:09:31.642 "name": "b6a52b2b-dd23-4076-8825-df7c266bdad5", 00:09:31.642 "aliases": [ 00:09:31.642 "lvs/lvol" 00:09:31.642 ], 00:09:31.642 "product_name": "Logical Volume", 00:09:31.642 "block_size": 4096, 00:09:31.642 "num_blocks": 38912, 00:09:31.642 "uuid": "b6a52b2b-dd23-4076-8825-df7c266bdad5", 00:09:31.642 "assigned_rate_limits": { 00:09:31.642 "rw_ios_per_sec": 0, 00:09:31.642 "rw_mbytes_per_sec": 0, 00:09:31.642 "r_mbytes_per_sec": 0, 00:09:31.642 "w_mbytes_per_sec": 0 00:09:31.642 }, 00:09:31.642 "claimed": false, 00:09:31.642 "zoned": false, 00:09:31.642 "supported_io_types": { 00:09:31.642 "read": true, 00:09:31.642 "write": true, 00:09:31.642 "unmap": true, 00:09:31.642 "flush": false, 00:09:31.642 "reset": true, 00:09:31.642 "nvme_admin": false, 00:09:31.642 "nvme_io": false, 00:09:31.642 "nvme_io_md": false, 00:09:31.642 "write_zeroes": true, 00:09:31.642 "zcopy": false, 00:09:31.642 "get_zone_info": false, 00:09:31.642 "zone_management": false, 00:09:31.642 "zone_append": false, 00:09:31.642 "compare": false, 00:09:31.642 "compare_and_write": false, 00:09:31.642 "abort": false, 00:09:31.642 "seek_hole": true, 00:09:31.642 "seek_data": true, 00:09:31.642 "copy": false, 00:09:31.642 "nvme_iov_md": false 00:09:31.642 }, 00:09:31.642 "driver_specific": { 00:09:31.642 "lvol": { 00:09:31.642 "lvol_store_uuid": "4e930bec-0633-46ed-9d78-505e493f296a", 00:09:31.642 "base_bdev": "aio_bdev", 00:09:31.642 "thin_provision": false, 00:09:31.642 "num_allocated_clusters": 38, 00:09:31.642 "snapshot": false, 00:09:31.642 "clone": false, 00:09:31.642 "esnap_clone": false 00:09:31.642 } 00:09:31.642 } 00:09:31.642 } 00:09:31.642 ] 00:09:31.901 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:09:31.901 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4e930bec-0633-46ed-9d78-505e493f296a 00:09:31.901 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:31.901 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:32.159 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4e930bec-0633-46ed-9d78-505e493f296a 00:09:32.159 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:32.159 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:32.159 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b6a52b2b-dd23-4076-8825-df7c266bdad5 00:09:32.417 16:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4e930bec-0633-46ed-9d78-505e493f296a 00:09:33.066 16:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # 
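[annotation] The free_clusters and total_data_clusters assertions follow from the sizes used above: 4 MiB clusters on a 200 MiB file give 50 clusters, of which 49 are reported as data clusters in this run (the remainder holds lvstore metadata); after the grow over the 400 MiB file that becomes 99; the 150 MiB lvol occupies ceil(150/4) = 38 clusters (the "num_allocated_clusters": 38 in the JSON above), leaving 99 - 38 = 61 free. A back-of-the-envelope check of the same arithmetic, under the assumption that one cluster's worth stays reserved for metadata as observed here:

    cluster_mb=4; file_mb=400; lvol_mb=150
    total=$(( file_mb / cluster_mb - 1 ))                  # 99 data clusters after the grow
    used=$(( (lvol_mb + cluster_mb - 1) / cluster_mb ))    # 38 clusters allocated to the lvol
    echo "free clusters: $(( total - used ))"              # 61, matching free_clusters above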
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:33.066 16:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:33.323 00:09:33.323 real 0m19.251s 00:09:33.323 user 0m18.727s 00:09:33.323 sys 0m2.013s 00:09:33.323 16:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:33.323 16:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:33.323 ************************************ 00:09:33.323 END TEST lvs_grow_clean 00:09:33.323 ************************************ 00:09:33.323 16:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:33.323 16:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:33.323 16:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:33.323 16:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:33.323 ************************************ 00:09:33.323 START TEST lvs_grow_dirty 00:09:33.323 ************************************ 00:09:33.323 16:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:09:33.323 16:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:33.323 16:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:33.323 16:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:33.323 16:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:33.323 16:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:33.323 16:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:33.323 16:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:33.323 16:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:33.323 16:14:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:33.580 16:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:33.580 16:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:33.837 16:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
lvs=8d45e063-b4e3-47de-9779-cee7eadb2b35 00:09:33.838 16:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8d45e063-b4e3-47de-9779-cee7eadb2b35 00:09:33.838 16:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:34.097 16:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:34.097 16:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:34.097 16:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8d45e063-b4e3-47de-9779-cee7eadb2b35 lvol 150 00:09:34.356 16:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f30f41e5-3b54-434f-8f64-bf6c8c90f05c 00:09:34.356 16:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:34.356 16:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:34.616 [2024-07-26 16:14:54.166684] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:34.616 [2024-07-26 16:14:54.166800] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:34.616 true 00:09:34.616 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8d45e063-b4e3-47de-9779-cee7eadb2b35 00:09:34.616 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:34.875 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:34.875 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:35.134 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f30f41e5-3b54-434f-8f64-bf6c8c90f05c 00:09:35.393 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:35.652 [2024-07-26 16:14:55.210151] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:35.652 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 
00:09:35.911 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=558848 00:09:35.911 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:35.911 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:35.911 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 558848 /var/tmp/bdevperf.sock 00:09:35.911 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 558848 ']' 00:09:35.911 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:35.911 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:35.911 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:35.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:35.911 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:35.911 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:35.911 [2024-07-26 16:14:55.547170] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:35.911 [2024-07-26 16:14:55.547309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid558848 ] 00:09:35.911 EAL: No free 2048 kB hugepages reported on node 1 00:09:36.168 [2024-07-26 16:14:55.675954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.428 [2024-07-26 16:14:55.937501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.995 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:36.995 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:36.995 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:37.253 Nvme0n1 00:09:37.253 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:37.511 [ 00:09:37.511 { 00:09:37.511 "name": "Nvme0n1", 00:09:37.511 "aliases": [ 00:09:37.511 "f30f41e5-3b54-434f-8f64-bf6c8c90f05c" 00:09:37.511 ], 00:09:37.511 "product_name": "NVMe disk", 00:09:37.511 "block_size": 4096, 00:09:37.511 "num_blocks": 38912, 00:09:37.511 "uuid": "f30f41e5-3b54-434f-8f64-bf6c8c90f05c", 00:09:37.511 "assigned_rate_limits": { 00:09:37.511 "rw_ios_per_sec": 0, 00:09:37.511 "rw_mbytes_per_sec": 0, 00:09:37.511 "r_mbytes_per_sec": 0, 00:09:37.511 "w_mbytes_per_sec": 0 00:09:37.511 }, 00:09:37.511 "claimed": false, 00:09:37.511 "zoned": false, 00:09:37.511 "supported_io_types": { 00:09:37.511 "read": true, 00:09:37.511 "write": true, 00:09:37.511 "unmap": true, 00:09:37.511 "flush": true, 00:09:37.511 "reset": true, 00:09:37.511 "nvme_admin": true, 00:09:37.511 "nvme_io": true, 00:09:37.511 "nvme_io_md": false, 00:09:37.511 "write_zeroes": true, 00:09:37.511 "zcopy": false, 00:09:37.511 "get_zone_info": false, 00:09:37.511 "zone_management": false, 00:09:37.511 "zone_append": false, 00:09:37.511 "compare": true, 00:09:37.511 "compare_and_write": true, 00:09:37.511 "abort": true, 00:09:37.511 "seek_hole": false, 00:09:37.511 "seek_data": false, 00:09:37.511 "copy": true, 00:09:37.511 "nvme_iov_md": false 00:09:37.511 }, 00:09:37.511 "memory_domains": [ 00:09:37.511 { 00:09:37.511 "dma_device_id": "system", 00:09:37.511 "dma_device_type": 1 00:09:37.511 } 00:09:37.511 ], 00:09:37.511 "driver_specific": { 00:09:37.511 "nvme": [ 00:09:37.511 { 00:09:37.511 "trid": { 00:09:37.511 "trtype": "TCP", 00:09:37.511 "adrfam": "IPv4", 00:09:37.511 "traddr": "10.0.0.2", 00:09:37.511 "trsvcid": "4420", 00:09:37.511 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:37.511 }, 00:09:37.511 "ctrlr_data": { 00:09:37.511 "cntlid": 1, 00:09:37.511 "vendor_id": "0x8086", 00:09:37.511 "model_number": "SPDK bdev Controller", 00:09:37.511 "serial_number": "SPDK0", 00:09:37.512 "firmware_revision": "24.09", 00:09:37.512 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:37.512 "oacs": { 00:09:37.512 "security": 0, 00:09:37.512 "format": 0, 00:09:37.512 "firmware": 0, 00:09:37.512 "ns_manage": 0 00:09:37.512 }, 00:09:37.512 
"multi_ctrlr": true, 00:09:37.512 "ana_reporting": false 00:09:37.512 }, 00:09:37.512 "vs": { 00:09:37.512 "nvme_version": "1.3" 00:09:37.512 }, 00:09:37.512 "ns_data": { 00:09:37.512 "id": 1, 00:09:37.512 "can_share": true 00:09:37.512 } 00:09:37.512 } 00:09:37.512 ], 00:09:37.512 "mp_policy": "active_passive" 00:09:37.512 } 00:09:37.512 } 00:09:37.512 ] 00:09:37.512 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=559118 00:09:37.512 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:37.512 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:37.771 Running I/O for 10 seconds... 00:09:38.708 Latency(us) 00:09:38.708 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:38.708 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.708 Nvme0n1 : 1.00 11023.00 43.06 0.00 0.00 0.00 0.00 0.00 00:09:38.708 =================================================================================================================== 00:09:38.708 Total : 11023.00 43.06 0.00 0.00 0.00 0.00 0.00 00:09:38.708 00:09:39.647 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8d45e063-b4e3-47de-9779-cee7eadb2b35 00:09:39.647 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.647 Nvme0n1 : 2.00 11197.00 43.74 0.00 0.00 0.00 0.00 0.00 00:09:39.647 =================================================================================================================== 00:09:39.647 Total : 11197.00 43.74 0.00 0.00 0.00 0.00 0.00 00:09:39.647 00:09:39.905 true 00:09:39.905 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8d45e063-b4e3-47de-9779-cee7eadb2b35 00:09:39.905 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:40.163 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:40.163 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:40.163 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 559118 00:09:40.731 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:40.731 Nvme0n1 : 3.00 11192.00 43.72 0.00 0.00 0.00 0.00 0.00 00:09:40.731 =================================================================================================================== 00:09:40.731 Total : 11192.00 43.72 0.00 0.00 0.00 0.00 0.00 00:09:40.731 00:09:41.666 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:41.666 Nvme0n1 : 4.00 11224.25 43.84 0.00 0.00 0.00 0.00 0.00 00:09:41.666 =================================================================================================================== 00:09:41.666 Total : 11224.25 43.84 0.00 0.00 0.00 0.00 0.00 00:09:41.666 00:09:42.605 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:09:42.605 Nvme0n1 : 5.00 11242.20 43.91 0.00 0.00 0.00 0.00 0.00 00:09:42.605 =================================================================================================================== 00:09:42.605 Total : 11242.20 43.91 0.00 0.00 0.00 0.00 0.00 00:09:42.605 00:09:43.983 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:43.983 Nvme0n1 : 6.00 11265.67 44.01 0.00 0.00 0.00 0.00 0.00 00:09:43.983 =================================================================================================================== 00:09:43.983 Total : 11265.67 44.01 0.00 0.00 0.00 0.00 0.00 00:09:43.983 00:09:44.921 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:44.921 Nvme0n1 : 7.00 11280.71 44.07 0.00 0.00 0.00 0.00 0.00 00:09:44.921 =================================================================================================================== 00:09:44.921 Total : 11280.71 44.07 0.00 0.00 0.00 0.00 0.00 00:09:44.921 00:09:45.859 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:45.859 Nvme0n1 : 8.00 11300.12 44.14 0.00 0.00 0.00 0.00 0.00 00:09:45.859 =================================================================================================================== 00:09:45.859 Total : 11300.12 44.14 0.00 0.00 0.00 0.00 0.00 00:09:45.859 00:09:46.798 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:46.798 Nvme0n1 : 9.00 11322.89 44.23 0.00 0.00 0.00 0.00 0.00 00:09:46.798 =================================================================================================================== 00:09:46.798 Total : 11322.89 44.23 0.00 0.00 0.00 0.00 0.00 00:09:46.798 00:09:47.735 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:47.735 Nvme0n1 : 10.00 11328.20 44.25 0.00 0.00 0.00 0.00 0.00 00:09:47.735 =================================================================================================================== 00:09:47.735 Total : 11328.20 44.25 0.00 0.00 0.00 0.00 0.00 00:09:47.735 00:09:47.735 00:09:47.735 Latency(us) 00:09:47.735 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:47.735 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:47.735 Nvme0n1 : 10.01 11329.67 44.26 0.00 0.00 11290.79 4393.34 22136.60 00:09:47.735 =================================================================================================================== 00:09:47.735 Total : 11329.67 44.26 0.00 0.00 11290.79 4393.34 22136.60 00:09:47.735 0 00:09:47.735 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 558848 00:09:47.735 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 558848 ']' 00:09:47.735 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 558848 00:09:47.735 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:09:47.735 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:47.735 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 558848 00:09:47.735 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:47.735 
16:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:47.735 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 558848' 00:09:47.735 killing process with pid 558848 00:09:47.735 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 558848 00:09:47.735 Received shutdown signal, test time was about 10.000000 seconds 00:09:47.735 00:09:47.735 Latency(us) 00:09:47.735 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:47.735 =================================================================================================================== 00:09:47.735 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:47.735 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 558848 00:09:49.187 16:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:49.187 16:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:49.445 16:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8d45e063-b4e3-47de-9779-cee7eadb2b35 00:09:49.445 16:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:49.703 16:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:49.703 16:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:49.703 16:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 556077 00:09:49.703 16:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 556077 00:09:49.703 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 556077 Killed "${NVMF_APP[@]}" "$@" 00:09:49.703 16:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:49.703 16:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:49.703 16:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:49.703 16:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:49.703 16:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:49.703 16:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=561080 00:09:49.703 16:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:49.703 16:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # 
waitforlisten 561080 00:09:49.703 16:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 561080 ']' 00:09:49.703 16:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.703 16:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:49.703 16:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.703 16:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:49.703 16:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:49.703 [2024-07-26 16:15:09.449643] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:49.703 [2024-07-26 16:15:09.449774] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.963 EAL: No free 2048 kB hugepages reported on node 1 00:09:49.963 [2024-07-26 16:15:09.593022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.222 [2024-07-26 16:15:09.847504] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:50.222 [2024-07-26 16:15:09.847589] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:50.222 [2024-07-26 16:15:09.847616] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:50.222 [2024-07-26 16:15:09.847642] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:50.222 [2024-07-26 16:15:09.847663] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
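The steps traced above are the core of the dirty-lvstore scenario: the test reads the store's free cluster count, kills the running nvmf_tgt with -9 so the lvstore is left dirty on its AIO backing file, and then starts a fresh target that has to recover it. A minimal sketch of that sequence (lines 68-76 of nvmf_lvs_grow.sh per the trace prefixes), using the rpc.py path, lvstore UUID and PIDs from this particular run; all of these values change from run to run:

  # lvs_grow_dirty kill/restart sequence (sketch; UUID and PIDs are from this run only)
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  free_clusters=$($rpc bdev_lvol_get_lvstores -u 8d45e063-b4e3-47de-9779-cee7eadb2b35 | jq -r '.[0].free_clusters')   # 61 here
  kill -9 556077          # old nvmf_tgt gets no clean shutdown, so the lvstore stays dirty
  wait 556077 || true     # reap it; the harness logs the "Killed" message seen above
  # a new nvmf_tgt is then started on core mask 0x1 (pid 561080 in this run) and the
  # script waits for its RPC socket before continuing with the recovery checks below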
00:09:50.222 [2024-07-26 16:15:09.847718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.790 16:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:50.790 16:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:50.790 16:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:50.790 16:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:50.790 16:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:50.790 16:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:50.790 16:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:51.049 [2024-07-26 16:15:10.676690] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:51.049 [2024-07-26 16:15:10.676937] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:51.049 [2024-07-26 16:15:10.677027] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:51.049 16:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:51.049 16:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f30f41e5-3b54-434f-8f64-bf6c8c90f05c 00:09:51.049 16:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=f30f41e5-3b54-434f-8f64-bf6c8c90f05c 00:09:51.049 16:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:51.049 16:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:51.049 16:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:51.049 16:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:51.049 16:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:51.309 16:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f30f41e5-3b54-434f-8f64-bf6c8c90f05c -t 2000 00:09:51.569 [ 00:09:51.569 { 00:09:51.569 "name": "f30f41e5-3b54-434f-8f64-bf6c8c90f05c", 00:09:51.569 "aliases": [ 00:09:51.569 "lvs/lvol" 00:09:51.569 ], 00:09:51.569 "product_name": "Logical Volume", 00:09:51.569 "block_size": 4096, 00:09:51.569 "num_blocks": 38912, 00:09:51.569 "uuid": "f30f41e5-3b54-434f-8f64-bf6c8c90f05c", 00:09:51.569 "assigned_rate_limits": { 00:09:51.569 "rw_ios_per_sec": 0, 00:09:51.569 "rw_mbytes_per_sec": 0, 00:09:51.569 "r_mbytes_per_sec": 0, 00:09:51.569 "w_mbytes_per_sec": 0 00:09:51.569 }, 00:09:51.569 "claimed": false, 00:09:51.569 "zoned": false, 
00:09:51.569 "supported_io_types": { 00:09:51.569 "read": true, 00:09:51.569 "write": true, 00:09:51.569 "unmap": true, 00:09:51.569 "flush": false, 00:09:51.569 "reset": true, 00:09:51.569 "nvme_admin": false, 00:09:51.569 "nvme_io": false, 00:09:51.569 "nvme_io_md": false, 00:09:51.569 "write_zeroes": true, 00:09:51.569 "zcopy": false, 00:09:51.569 "get_zone_info": false, 00:09:51.569 "zone_management": false, 00:09:51.569 "zone_append": false, 00:09:51.569 "compare": false, 00:09:51.569 "compare_and_write": false, 00:09:51.569 "abort": false, 00:09:51.569 "seek_hole": true, 00:09:51.569 "seek_data": true, 00:09:51.569 "copy": false, 00:09:51.569 "nvme_iov_md": false 00:09:51.569 }, 00:09:51.569 "driver_specific": { 00:09:51.569 "lvol": { 00:09:51.569 "lvol_store_uuid": "8d45e063-b4e3-47de-9779-cee7eadb2b35", 00:09:51.569 "base_bdev": "aio_bdev", 00:09:51.569 "thin_provision": false, 00:09:51.569 "num_allocated_clusters": 38, 00:09:51.569 "snapshot": false, 00:09:51.569 "clone": false, 00:09:51.569 "esnap_clone": false 00:09:51.569 } 00:09:51.569 } 00:09:51.569 } 00:09:51.569 ] 00:09:51.569 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:51.569 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8d45e063-b4e3-47de-9779-cee7eadb2b35 00:09:51.569 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:51.829 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:51.829 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8d45e063-b4e3-47de-9779-cee7eadb2b35 00:09:51.829 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:52.088 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:52.088 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:52.348 [2024-07-26 16:15:11.973338] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:52.348 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8d45e063-b4e3-47de-9779-cee7eadb2b35 00:09:52.348 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:52.348 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8d45e063-b4e3-47de-9779-cee7eadb2b35 00:09:52.348 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:52.348 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:09:52.348 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:52.348 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:52.348 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:52.348 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:52.348 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:52.348 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:52.348 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8d45e063-b4e3-47de-9779-cee7eadb2b35 00:09:52.607 request: 00:09:52.607 { 00:09:52.607 "uuid": "8d45e063-b4e3-47de-9779-cee7eadb2b35", 00:09:52.607 "method": "bdev_lvol_get_lvstores", 00:09:52.607 "req_id": 1 00:09:52.607 } 00:09:52.607 Got JSON-RPC error response 00:09:52.607 response: 00:09:52.607 { 00:09:52.607 "code": -19, 00:09:52.607 "message": "No such device" 00:09:52.607 } 00:09:52.607 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:52.607 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:52.607 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:52.607 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:52.607 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:52.864 aio_bdev 00:09:52.864 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f30f41e5-3b54-434f-8f64-bf6c8c90f05c 00:09:52.864 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=f30f41e5-3b54-434f-8f64-bf6c8c90f05c 00:09:52.864 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:52.864 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:52.865 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:52.865 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:52.865 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:53.122 16:15:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f30f41e5-3b54-434f-8f64-bf6c8c90f05c -t 2000 00:09:53.382 [ 00:09:53.382 { 00:09:53.382 "name": "f30f41e5-3b54-434f-8f64-bf6c8c90f05c", 00:09:53.382 "aliases": [ 00:09:53.382 "lvs/lvol" 00:09:53.382 ], 00:09:53.382 "product_name": "Logical Volume", 00:09:53.382 "block_size": 4096, 00:09:53.382 "num_blocks": 38912, 00:09:53.382 "uuid": "f30f41e5-3b54-434f-8f64-bf6c8c90f05c", 00:09:53.382 "assigned_rate_limits": { 00:09:53.382 "rw_ios_per_sec": 0, 00:09:53.382 "rw_mbytes_per_sec": 0, 00:09:53.382 "r_mbytes_per_sec": 0, 00:09:53.382 "w_mbytes_per_sec": 0 00:09:53.382 }, 00:09:53.382 "claimed": false, 00:09:53.382 "zoned": false, 00:09:53.382 "supported_io_types": { 00:09:53.382 "read": true, 00:09:53.382 "write": true, 00:09:53.382 "unmap": true, 00:09:53.382 "flush": false, 00:09:53.382 "reset": true, 00:09:53.382 "nvme_admin": false, 00:09:53.382 "nvme_io": false, 00:09:53.382 "nvme_io_md": false, 00:09:53.382 "write_zeroes": true, 00:09:53.382 "zcopy": false, 00:09:53.382 "get_zone_info": false, 00:09:53.382 "zone_management": false, 00:09:53.382 "zone_append": false, 00:09:53.382 "compare": false, 00:09:53.382 "compare_and_write": false, 00:09:53.382 "abort": false, 00:09:53.382 "seek_hole": true, 00:09:53.382 "seek_data": true, 00:09:53.382 "copy": false, 00:09:53.382 "nvme_iov_md": false 00:09:53.382 }, 00:09:53.382 "driver_specific": { 00:09:53.382 "lvol": { 00:09:53.382 "lvol_store_uuid": "8d45e063-b4e3-47de-9779-cee7eadb2b35", 00:09:53.382 "base_bdev": "aio_bdev", 00:09:53.382 "thin_provision": false, 00:09:53.382 "num_allocated_clusters": 38, 00:09:53.382 "snapshot": false, 00:09:53.382 "clone": false, 00:09:53.382 "esnap_clone": false 00:09:53.382 } 00:09:53.382 } 00:09:53.382 } 00:09:53.382 ] 00:09:53.382 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:53.382 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8d45e063-b4e3-47de-9779-cee7eadb2b35 00:09:53.382 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:53.642 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:53.642 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8d45e063-b4e3-47de-9779-cee7eadb2b35 00:09:53.642 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:53.903 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:53.903 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f30f41e5-3b54-434f-8f64-bf6c8c90f05c 00:09:54.162 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8d45e063-b4e3-47de-9779-cee7eadb2b35 
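After recovery the test re-creates the AIO bdev, waits for the lvol to reappear, and re-runs the same lvstore query to confirm that both the free and total cluster counts survived the unclean shutdown before tearing everything down. A compressed sketch of those checks and the start of the cleanup, with the UUIDs and counts taken from this run:

  # post-recovery verification and teardown (sketch; UUIDs and expected counts from this run)
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  lvs_uuid=8d45e063-b4e3-47de-9779-cee7eadb2b35
  free=$($rpc bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].free_clusters')
  total=$($rpc bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].total_data_clusters')
  (( free == 61 ))    # unchanged by the kill -9 and blobstore recovery
  (( total == 99 ))
  $rpc bdev_lvol_delete f30f41e5-3b54-434f-8f64-bf6c8c90f05c
  $rpc bdev_lvol_delete_lvstore -u "$lvs_uuid"
  # the trace continues below with bdev_aio_delete aio_bdev and rm -f of the AIO backing file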
00:09:54.420 16:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:54.679 16:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:54.679 00:09:54.679 real 0m21.479s 00:09:54.679 user 0m53.952s 00:09:54.679 sys 0m5.025s 00:09:54.679 16:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:54.679 16:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:54.679 ************************************ 00:09:54.679 END TEST lvs_grow_dirty 00:09:54.679 ************************************ 00:09:54.679 16:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:54.679 16:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:54.679 16:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:54.679 16:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:54.679 16:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:54.679 16:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:54.679 16:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:54.679 16:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:54.679 16:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:54.679 nvmf_trace.0 00:09:54.679 16:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:54.679 16:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:54.679 16:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:54.679 16:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:09:54.679 16:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:54.679 16:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:09:54.679 16:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:54.679 16:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:54.679 rmmod nvme_tcp 00:09:54.679 rmmod nvme_fabrics 00:09:54.679 rmmod nvme_keyring 00:09:54.938 16:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:54.938 16:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:09:54.938 16:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:09:54.938 16:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 561080 ']' 00:09:54.938 16:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 561080 00:09:54.938 
16:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 561080 ']' 00:09:54.938 16:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 561080 00:09:54.938 16:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:54.938 16:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:54.938 16:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 561080 00:09:54.938 16:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:54.938 16:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:54.938 16:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 561080' 00:09:54.938 killing process with pid 561080 00:09:54.938 16:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 561080 00:09:54.938 16:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 561080 00:09:56.320 16:15:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:56.320 16:15:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:56.320 16:15:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:56.320 16:15:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:56.320 16:15:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:56.320 16:15:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.320 16:15:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:56.320 16:15:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:58.227 00:09:58.227 real 0m47.944s 00:09:58.227 user 1m20.423s 00:09:58.227 sys 0m9.042s 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:58.227 ************************************ 00:09:58.227 END TEST nvmf_lvs_grow 00:09:58.227 ************************************ 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:58.227 ************************************ 00:09:58.227 START TEST nvmf_bdev_io_wait 00:09:58.227 ************************************ 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:58.227 * Looking for test storage... 00:09:58.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:58.227 
16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:09:58.227 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:00.766 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:00.766 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:10:00.766 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:00.766 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:00.766 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:00.766 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:00.766 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:00.766 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:10:00.766 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:00.766 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:10:00.766 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:10:00.766 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:10:00.766 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:10:00.766 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:10:00.766 16:15:19 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:10:00.766 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:00.766 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:00.766 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:00.766 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:00.766 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:00.766 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:00.766 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:00.766 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:00.766 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:00.766 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:00.766 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:00.766 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:00.766 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:00.766 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:00.766 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:00.766 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:00.766 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:00.766 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:00.766 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:00.766 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:00.766 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:00.767 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:00.767 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:00.767 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:00.767 16:15:19 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:00.767 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:00.767 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:00.767 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:00.767 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:00.767 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:00.767 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:00.767 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:00.767 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:00.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:00.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:10:00.767 00:10:00.767 --- 10.0.0.2 ping statistics --- 00:10:00.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.767 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:10:00.767 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:00.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:00.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:10:00.767 00:10:00.767 --- 10.0.0.1 ping statistics --- 00:10:00.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.767 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:10:00.767 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:00.767 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:10:00.767 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:00.767 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:00.767 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:00.767 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:00.767 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:00.767 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:00.767 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:00.767 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:00.767 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:00.767 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:00.767 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:00.767 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=563862 00:10:00.767 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 563862 00:10:00.767 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 563862 ']' 00:10:00.767 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.767 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:00.767 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:00.767 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.767 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:00.767 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:00.767 [2024-07-26 16:15:20.194392] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:00.767 [2024-07-26 16:15:20.194545] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.767 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.767 [2024-07-26 16:15:20.336368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:01.026 [2024-07-26 16:15:20.599566] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:01.026 [2024-07-26 16:15:20.599651] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:01.026 [2024-07-26 16:15:20.599680] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:01.026 [2024-07-26 16:15:20.599702] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:01.026 [2024-07-26 16:15:20.599724] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:01.026 [2024-07-26 16:15:20.599907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:01.026 [2024-07-26 16:15:20.599978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:01.026 [2024-07-26 16:15:20.600097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.026 [2024-07-26 16:15:20.600106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:01.594 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:01.594 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:10:01.594 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:01.594 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:01.594 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.594 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:01.594 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:01.594 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.594 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.594 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.594 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:01.594 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.594 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.853 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.853 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:01.853 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.853 16:15:21 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.853 [2024-07-26 16:15:21.376544] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:01.853 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.854 Malloc0 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.854 [2024-07-26 16:15:21.488659] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=564027 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=564029 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:01.854 { 00:10:01.854 "params": { 00:10:01.854 "name": "Nvme$subsystem", 00:10:01.854 "trtype": "$TEST_TRANSPORT", 00:10:01.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:01.854 "adrfam": "ipv4", 00:10:01.854 "trsvcid": "$NVMF_PORT", 00:10:01.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:01.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:01.854 "hdgst": ${hdgst:-false}, 00:10:01.854 "ddgst": ${ddgst:-false} 00:10:01.854 }, 00:10:01.854 "method": "bdev_nvme_attach_controller" 00:10:01.854 } 00:10:01.854 EOF 00:10:01.854 )") 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=564031 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:01.854 { 00:10:01.854 "params": { 00:10:01.854 "name": "Nvme$subsystem", 00:10:01.854 "trtype": "$TEST_TRANSPORT", 00:10:01.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:01.854 "adrfam": "ipv4", 00:10:01.854 "trsvcid": "$NVMF_PORT", 00:10:01.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:01.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:01.854 "hdgst": ${hdgst:-false}, 00:10:01.854 "ddgst": ${ddgst:-false} 00:10:01.854 }, 00:10:01.854 "method": "bdev_nvme_attach_controller" 00:10:01.854 } 00:10:01.854 EOF 00:10:01.854 )") 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=564034 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:01.854 { 00:10:01.854 "params": { 00:10:01.854 "name": "Nvme$subsystem", 00:10:01.854 "trtype": "$TEST_TRANSPORT", 00:10:01.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:01.854 "adrfam": "ipv4", 00:10:01.854 "trsvcid": "$NVMF_PORT", 00:10:01.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:01.854 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:10:01.854 "hdgst": ${hdgst:-false}, 00:10:01.854 "ddgst": ${ddgst:-false} 00:10:01.854 }, 00:10:01.854 "method": "bdev_nvme_attach_controller" 00:10:01.854 } 00:10:01.854 EOF 00:10:01.854 )") 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:01.854 { 00:10:01.854 "params": { 00:10:01.854 "name": "Nvme$subsystem", 00:10:01.854 "trtype": "$TEST_TRANSPORT", 00:10:01.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:01.854 "adrfam": "ipv4", 00:10:01.854 "trsvcid": "$NVMF_PORT", 00:10:01.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:01.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:01.854 "hdgst": ${hdgst:-false}, 00:10:01.854 "ddgst": ${ddgst:-false} 00:10:01.854 }, 00:10:01.854 "method": "bdev_nvme_attach_controller" 00:10:01.854 } 00:10:01.854 EOF 00:10:01.854 )") 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 564027 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:01.854 "params": { 00:10:01.854 "name": "Nvme1", 00:10:01.854 "trtype": "tcp", 00:10:01.854 "traddr": "10.0.0.2", 00:10:01.854 "adrfam": "ipv4", 00:10:01.854 "trsvcid": "4420", 00:10:01.854 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:01.854 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:01.854 "hdgst": false, 00:10:01.854 "ddgst": false 00:10:01.854 }, 00:10:01.854 "method": "bdev_nvme_attach_controller" 00:10:01.854 }' 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:01.854 "params": { 00:10:01.854 "name": "Nvme1", 00:10:01.854 "trtype": "tcp", 00:10:01.854 "traddr": "10.0.0.2", 00:10:01.854 "adrfam": "ipv4", 00:10:01.854 "trsvcid": "4420", 00:10:01.854 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:01.854 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:01.854 "hdgst": false, 00:10:01.854 "ddgst": false 00:10:01.854 }, 00:10:01.854 "method": "bdev_nvme_attach_controller" 00:10:01.854 }' 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:01.854 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:01.854 "params": { 00:10:01.854 "name": "Nvme1", 00:10:01.854 "trtype": "tcp", 00:10:01.854 "traddr": "10.0.0.2", 00:10:01.854 "adrfam": "ipv4", 00:10:01.854 "trsvcid": "4420", 00:10:01.854 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:01.854 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:01.854 "hdgst": false, 00:10:01.854 "ddgst": false 00:10:01.854 }, 00:10:01.854 "method": "bdev_nvme_attach_controller" 00:10:01.854 }' 00:10:01.855 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:01.855 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:01.855 "params": { 00:10:01.855 "name": "Nvme1", 00:10:01.855 "trtype": "tcp", 00:10:01.855 "traddr": "10.0.0.2", 00:10:01.855 "adrfam": "ipv4", 00:10:01.855 "trsvcid": "4420", 00:10:01.855 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:01.855 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:01.855 "hdgst": false, 00:10:01.855 "ddgst": false 00:10:01.855 }, 00:10:01.855 "method": "bdev_nvme_attach_controller" 00:10:01.855 }' 00:10:01.855 [2024-07-26 16:15:21.572273] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:01.855 [2024-07-26 16:15:21.572273] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:01.855 [2024-07-26 16:15:21.572437] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-26 16:15:21.572437] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:01.855 --proc-type=auto ] 00:10:01.855 [2024-07-26 16:15:21.575180] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
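The xtrace above is the gen_nvmf_target_json helper from nvmf/common.sh at work: for each requested subsystem it appends one bdev_nvme_attach_controller parameter block, built with a here-document, to a bash array, then joins the blocks with commas and runs the result through jq; each bdevperf instance reads the finished JSON through process substitution, which is why the command lines show --json /dev/fd/63. A condensed sketch of that pattern (simplified, not the verbatim helper; the defaults shown for the environment variables are only illustrative):

gen_target_json_sketch() {
    local subsystem
    local config=()
    for subsystem in "${@:-1}"; do
        # one bdev_nvme_attach_controller entry per subsystem, filled from the test environment
        config+=("$(cat << EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # join the per-subsystem fragments with commas and let jq validate/pretty-print the document
    local IFS=,
    jq . << JSON
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
JSON
}

# a bdevperf instance then consumes the result via process substitution, e.g.
#   bdevperf -m 0x20 -i 2 --json <(gen_target_json_sketch) -q 128 -o 4096 -w read -t 1 -s 256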
00:10:01.855 [2024-07-26 16:15:21.575179] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:01.855 [2024-07-26 16:15:21.575326] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-26 16:15:21.575328] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:01.855 --proc-type=auto ] 00:10:02.113 EAL: No free 2048 kB hugepages reported on node 1 00:10:02.113 EAL: No free 2048 kB hugepages reported on node 1 00:10:02.113 [2024-07-26 16:15:21.818533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.113 EAL: No free 2048 kB hugepages reported on node 1 00:10:02.371 [2024-07-26 16:15:21.923409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.371 EAL: No free 2048 kB hugepages reported on node 1 00:10:02.371 [2024-07-26 16:15:21.996278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.371 [2024-07-26 16:15:22.048471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:02.371 [2024-07-26 16:15:22.072259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.631 [2024-07-26 16:15:22.149098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:02.631 [2024-07-26 16:15:22.213758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:10:02.631 [2024-07-26 16:15:22.290938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:02.889 Running I/O for 1 seconds... 00:10:03.148 Running I/O for 1 seconds... 00:10:03.148 Running I/O for 1 seconds... 00:10:03.148 Running I/O for 1 seconds... 
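The bdev_io_wait test launches four bdevperf instances concurrently, one per workload (write, read, flush, unmap), each pinned to its own core mask and given its own SPDK shm id (-i 1..4); that is why the EAL lines above carry distinct --file-prefix=spdk1..spdk4 and why their stdout is interleaved. The script records each PID and blocks on all of them before tearing the target down. Roughly as below (a sketch: WRITE_PID/READ_PID names are assumed, only FLUSH_PID and UNMAP_PID appear in this excerpt, and gen_target_json_sketch stands in for gen_nvmf_target_json):

BDEVPERF=build/examples/bdevperf   # path relative to the SPDK tree

# one instance per workload, each on its own core and with its own shm id / file prefix
$BDEVPERF -m 0x10 -i 1 --json <(gen_target_json_sketch) -q 128 -o 4096 -w write -t 1 -s 256 &
WRITE_PID=$!
$BDEVPERF -m 0x20 -i 2 --json <(gen_target_json_sketch) -q 128 -o 4096 -w read -t 1 -s 256 &
READ_PID=$!
$BDEVPERF -m 0x40 -i 3 --json <(gen_target_json_sketch) -q 128 -o 4096 -w flush -t 1 -s 256 &
FLUSH_PID=$!
$BDEVPERF -m 0x80 -i 4 --json <(gen_target_json_sketch) -q 128 -o 4096 -w unmap -t 1 -s 256 &
UNMAP_PID=$!

# block until all four short runs have reported their latency tables
wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID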
00:10:03.716 00:10:03.716 Latency(us) 00:10:03.716 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:03.716 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:03.716 Nvme1n1 : 1.03 5783.32 22.59 0.00 0.00 21743.09 4393.34 39030.33 00:10:03.716 =================================================================================================================== 00:10:03.716 Total : 5783.32 22.59 0.00 0.00 21743.09 4393.34 39030.33 00:10:03.975 00:10:03.975 Latency(us) 00:10:03.975 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:03.975 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:03.975 Nvme1n1 : 1.00 148047.45 578.31 0.00 0.00 861.36 332.23 1243.97 00:10:03.975 =================================================================================================================== 00:10:03.975 Total : 148047.45 578.31 0.00 0.00 861.36 332.23 1243.97 00:10:03.975 00:10:03.975 Latency(us) 00:10:03.975 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:03.975 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:03.975 Nvme1n1 : 1.01 5869.85 22.93 0.00 0.00 21716.08 7136.14 52040.44 00:10:03.975 =================================================================================================================== 00:10:03.975 Total : 5869.85 22.93 0.00 0.00 21716.08 7136.14 52040.44 00:10:04.234 00:10:04.234 Latency(us) 00:10:04.234 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:04.234 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:04.234 Nvme1n1 : 1.01 7083.80 27.67 0.00 0.00 17949.49 7378.87 25437.68 00:10:04.234 =================================================================================================================== 00:10:04.234 Total : 7083.80 27.67 0.00 0.00 17949.49 7378.87 25437.68 00:10:04.823 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 564029 00:10:05.089 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 564031 00:10:05.089 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 564034 00:10:05.089 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:05.089 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.089 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:05.089 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.089 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:05.089 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:05.089 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:05.089 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:10:05.089 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:05.089 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:10:05.089 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 
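As a sanity check on the tables above, the MiB/s column is simply the IOPS figure scaled by the 4096-byte I/O size: MiB/s = IOPS * 4096 / 2^20. For the read job, 7083.80 * 4096 / 1048576 = 27.67 MiB/s, and for the flush job, 148047.45 * 4096 / 1048576 = 578.31 MiB/s, both matching the reported values. The flush workload completes at a far higher rate than the data-moving workloads, presumably because a flush against the target's RAM-backed bdev finishes without transferring any payload.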
00:10:05.089 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:05.089 rmmod nvme_tcp 00:10:05.349 rmmod nvme_fabrics 00:10:05.349 rmmod nvme_keyring 00:10:05.349 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:05.349 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:10:05.349 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:10:05.349 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 563862 ']' 00:10:05.349 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 563862 00:10:05.349 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 563862 ']' 00:10:05.349 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 563862 00:10:05.349 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:10:05.349 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:05.349 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 563862 00:10:05.349 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:05.349 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:05.349 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 563862' 00:10:05.349 killing process with pid 563862 00:10:05.349 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 563862 00:10:05.349 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 563862 00:10:06.728 16:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:06.728 16:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:06.728 16:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:06.728 16:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:06.728 16:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:06.728 16:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.728 16:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.728 16:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.633 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:08.633 00:10:08.633 real 0m10.256s 00:10:08.633 user 0m31.153s 00:10:08.633 sys 0m4.034s 00:10:08.633 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:08.633 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:08.633 ************************************ 00:10:08.633 END TEST nvmf_bdev_io_wait 
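The teardown above runs through nvmftestfini: unload the nvme-tcp/nvme-fabrics/nvme-keyring modules, then stop the nvmf target through the killprocess helper, which checks that the PID is still alive and still looks like an SPDK reactor before signalling and reaping it. A minimal sketch of that helper's shape (simplified; the real helper has additional platform and sudo special-casing):

killprocess_sketch() {
    local pid=$1 process_name=
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2> /dev/null || return 0              # already gone, nothing to do
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0 for an SPDK app
    fi
    echo "killing process with pid $pid ($process_name)"
    kill "$pid"
    wait "$pid"                                          # reap it so sockets and hugepages are released
}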
00:10:08.633 ************************************ 00:10:08.633 16:15:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:08.633 16:15:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:08.634 ************************************ 00:10:08.634 START TEST nvmf_queue_depth 00:10:08.634 ************************************ 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:08.634 * Looking for test storage... 00:10:08.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:10:08.634 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@296 -- # e810=() 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:10.541 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:10.541 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:10.541 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:10.541 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:10.542 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:10.542 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:10.542 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:10.542 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:10.542 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:10:10.542 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:10.542 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:10.542 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:10.542 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:10.542 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:10.542 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:10.542 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:10.542 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:10.542 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:10.542 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:10.542 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:10.542 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:10.542 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:10.542 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:10.542 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:10.542 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:10.542 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:10.542 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:10.542 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:10.542 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:10.542 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:10.542 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:10.542 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:10.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:10.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:10:10.542 00:10:10.542 --- 10.0.0.2 ping statistics --- 00:10:10.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.542 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:10:10.542 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:10.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:10.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:10:10.542 00:10:10.542 --- 10.0.0.1 ping statistics --- 00:10:10.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.542 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:10:10.542 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:10.542 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:10:10.542 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:10.542 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:10.542 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:10.542 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:10.542 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:10.542 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:10.542 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:10.803 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:10.803 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:10.803 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:10.803 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:10.803 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=566521 00:10:10.803 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:10.803 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 566521 00:10:10.803 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 566521 ']' 00:10:10.803 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.803 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:10.803 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
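The nvmf_tcp_init sequence above (nvmf/common.sh@229 onward) gives the target-side port its own network namespace, so the initiator (10.0.0.1 on cvl_0_1) and the target (10.0.0.2 on cvl_0_0 inside cvl_0_0_ns_spdk) run independent network stacks on the same host; the two pings confirm the link in both directions before any NVMe/TCP traffic flows. Condensed, the setup amounts to the following (interface and namespace names as in this run):

TARGET_IF=cvl_0_0; INITIATOR_IF=cvl_0_1; NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"                     # target port now lives in its own namespace

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"              # initiator side, default namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP (port 4420) through the test interface
ping -c 1 10.0.0.2                                       # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> initiator

# every nvmf_tgt invocation in the rest of the test is then prefixed with: ip netns exec $NS ...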
00:10:10.803 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:10.803 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:10.803 [2024-07-26 16:15:30.412208] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:10.803 [2024-07-26 16:15:30.412355] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:10.803 EAL: No free 2048 kB hugepages reported on node 1 00:10:10.803 [2024-07-26 16:15:30.547358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.063 [2024-07-26 16:15:30.799159] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:11.063 [2024-07-26 16:15:30.799238] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:11.063 [2024-07-26 16:15:30.799260] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:11.063 [2024-07-26 16:15:30.799281] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:11.063 [2024-07-26 16:15:30.799299] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:11.063 [2024-07-26 16:15:30.799361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.633 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:11.633 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:11.633 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:11.633 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:11.633 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:11.633 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:11.633 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:11.633 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.633 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:11.633 [2024-07-26 16:15:31.362690] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:11.633 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.633 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:11.633 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.633 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:11.891 Malloc0 00:10:11.891 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.891 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:11.891 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.891 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:11.891 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.891 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:11.891 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.891 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:11.891 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.891 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:11.891 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.891 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:11.891 [2024-07-26 16:15:31.475006] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:11.891 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.891 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=566678 00:10:11.891 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:11.891 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 566678 /var/tmp/bdevperf.sock 00:10:11.891 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 566678 ']' 00:10:11.891 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:11.891 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:11.891 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:11.891 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:11.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:11.891 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:11.891 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:11.891 [2024-07-26 16:15:31.557496] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
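Condensed, the queue_depth setup above, together with the two steps that follow in the trace, comes down to: create a TCP transport, back it with a 64 MiB / 512 B Malloc bdev exposed as a namespace of cnode1, listen on 10.0.0.2:4420, start bdevperf in -z (wait-for-RPC) mode on its own socket, attach the remote controller over that socket, and fire perform_tests for the 10-second verify run at queue depth 1024. A sketch as plain rpc.py calls (paths relative to the SPDK tree; in the trace these go through the rpc_cmd wrapper and the target runs inside the cvl_0_0_ns_spdk namespace):

RPC=scripts/rpc.py

# target side
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# host side: bdevperf waits on its own RPC socket until told what to attach
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
# (the test waits for /var/tmp/bdevperf.sock to appear before issuing the next RPCs)

$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests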
00:10:11.891 [2024-07-26 16:15:31.557651] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid566678 ] 00:10:11.891 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.149 [2024-07-26 16:15:31.687120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.407 [2024-07-26 16:15:31.943658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.972 16:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:12.972 16:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:12.972 16:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:12.972 16:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.972 16:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:12.972 NVMe0n1 00:10:12.973 16:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.973 16:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:13.231 Running I/O for 10 seconds... 00:10:23.214 00:10:23.214 Latency(us) 00:10:23.214 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:23.214 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:23.214 Verification LBA range: start 0x0 length 0x4000 00:10:23.214 NVMe0n1 : 10.13 6129.51 23.94 0.00 0.00 165976.21 24078.41 102527.43 00:10:23.214 =================================================================================================================== 00:10:23.214 Total : 6129.51 23.94 0.00 0.00 165976.21 24078.41 102527.43 00:10:23.214 0 00:10:23.214 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 566678 00:10:23.214 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 566678 ']' 00:10:23.214 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 566678 00:10:23.214 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:23.214 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:23.214 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 566678 00:10:23.214 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:23.214 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:23.214 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 566678' 00:10:23.214 killing process with pid 566678 00:10:23.214 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 566678 00:10:23.214 Received shutdown signal, 
test time was about 10.000000 seconds 00:10:23.214 00:10:23.214 Latency(us) 00:10:23.214 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:23.214 =================================================================================================================== 00:10:23.214 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:23.214 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 566678 00:10:24.628 16:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:24.628 16:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:24.628 16:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:24.628 16:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:10:24.628 16:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:24.628 16:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:10:24.628 16:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:24.628 16:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:24.628 rmmod nvme_tcp 00:10:24.628 rmmod nvme_fabrics 00:10:24.628 rmmod nvme_keyring 00:10:24.628 16:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:24.628 16:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:10:24.628 16:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:10:24.628 16:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 566521 ']' 00:10:24.628 16:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 566521 00:10:24.628 16:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 566521 ']' 00:10:24.628 16:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 566521 00:10:24.628 16:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:24.628 16:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:24.628 16:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 566521 00:10:24.628 16:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:24.628 16:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:24.628 16:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 566521' 00:10:24.628 killing process with pid 566521 00:10:24.628 16:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 566521 00:10:24.628 16:15:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 566521 00:10:26.007 16:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:26.007 16:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:26.007 16:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 
-- # nvmf_tcp_fini 00:10:26.007 16:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:26.007 16:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:26.007 16:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:26.007 16:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:26.007 16:15:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.589 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:28.589 00:10:28.589 real 0m19.490s 00:10:28.589 user 0m28.005s 00:10:28.589 sys 0m3.131s 00:10:28.589 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:28.589 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:28.589 ************************************ 00:10:28.589 END TEST nvmf_queue_depth 00:10:28.589 ************************************ 00:10:28.589 16:15:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:28.589 16:15:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:28.589 16:15:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:28.589 16:15:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:28.589 ************************************ 00:10:28.589 START TEST nvmf_target_multipath 00:10:28.589 ************************************ 00:10:28.589 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:28.589 * Looking for test storage... 
00:10:28.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:28.589 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:28.589 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:28.589 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:28.589 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:10:28.590 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 
00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:30.496 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:30.496 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:30.496 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:30.496 16:15:49 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:30.496 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:30.496 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:30.497 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:30.497 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:30.497 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:30.497 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:30.497 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:30.497 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:30.497 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:30.497 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:30.497 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:30.497 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:30.497 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:30.497 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:30.497 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:30.497 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:30.497 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:30.497 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:30.497 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:30.497 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:30.497 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:30.497 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:30.497 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:10:30.497 00:10:30.497 --- 10.0.0.2 ping statistics --- 00:10:30.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.497 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:10:30.497 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:30.497 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:30.497 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:10:30.497 00:10:30.497 --- 10.0.0.1 ping statistics --- 00:10:30.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.497 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:10:30.497 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:30.497 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:10:30.497 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:30.497 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:30.497 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:30.497 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:30.497 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:30.497 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:30.497 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:30.497 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:30.497 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:30.497 only one NIC for nvmf test 00:10:30.497 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:30.497 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:30.497 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:10:30.497 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:30.497 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:10:30.497 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:30.497 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:30.497 rmmod nvme_tcp 00:10:30.497 rmmod nvme_fabrics 00:10:30.497 rmmod nvme_keyring 00:10:30.497 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:30.497 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:10:30.497 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:10:30.497 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:30.497 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:30.497 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:30.497 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:30.497 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:30.497 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:30.497 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.497 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.497 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.402 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:32.402 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:32.402 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:32.402 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:32.402 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:10:32.402 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:32.402 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:10:32.402 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:32.402 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:32.402 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:32.402 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:10:32.402 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:10:32.402 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:32.402 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:32.402 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:32.402 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:32.402 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:32.402 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:32.402 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.402 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.402 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.402 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:32.402 00:10:32.402 real 0m4.376s 
00:10:32.402 user 0m0.904s 00:10:32.402 sys 0m1.461s 00:10:32.402 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:32.402 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:32.402 ************************************ 00:10:32.402 END TEST nvmf_target_multipath 00:10:32.402 ************************************ 00:10:32.402 16:15:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:32.402 16:15:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:32.402 16:15:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:32.402 16:15:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:32.661 ************************************ 00:10:32.661 START TEST nvmf_zcopy 00:10:32.661 ************************************ 00:10:32.661 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:32.662 * Looking for test storage... 00:10:32.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:32.662 16:15:52 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:32.662 16:15:52 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:10:32.662 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:34.566 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:34.566 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:10:34.566 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:34.566 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:34.566 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:34.566 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:34.566 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:34.566 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:10:34.566 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:34.566 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:10:34.566 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:10:34.566 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:10:34.566 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:10:34.566 16:15:54 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:10:34.566 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:10:34.566 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:34.566 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:34.566 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:34.566 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:34.566 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:34.566 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:34.566 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:34.566 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:34.566 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:34.566 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:34.566 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:34.567 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:34.567 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 
-- # [[ ice == unbound ]] 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:34.567 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:34.567 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:34.567 16:15:54 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:34.567 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:34.825 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:34.825 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:34.825 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:34.825 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:34.825 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:34.825 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:34.825 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:34.825 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:34.825 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:10:34.825 00:10:34.825 --- 10.0.0.2 ping statistics --- 00:10:34.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:34.825 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:10:34.825 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:34.825 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:34.825 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:10:34.825 00:10:34.825 --- 10.0.0.1 ping statistics --- 00:10:34.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:34.825 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:10:34.825 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:34.825 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:10:34.826 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:34.826 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:34.826 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:34.826 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:34.826 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:34.826 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:34.826 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:34.826 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:34.826 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:34.826 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:34.826 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:34.826 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=572136 00:10:34.826 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:34.826 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 572136 00:10:34.826 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 572136 ']' 00:10:34.826 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.826 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:34.826 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.826 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:34.826 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:34.826 [2024-07-26 16:15:54.527267] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:34.826 [2024-07-26 16:15:54.527441] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:35.085 EAL: No free 2048 kB hugepages reported on node 1 00:10:35.085 [2024-07-26 16:15:54.659155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.345 [2024-07-26 16:15:54.913171] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:35.345 [2024-07-26 16:15:54.913247] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:35.345 [2024-07-26 16:15:54.913276] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:35.345 [2024-07-26 16:15:54.913301] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:35.345 [2024-07-26 16:15:54.913323] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:35.346 [2024-07-26 16:15:54.913370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:35.915 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:35.915 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:10:35.915 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:35.915 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:35.915 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:35.915 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:35.915 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:35.915 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:35.915 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.915 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:35.915 [2024-07-26 16:15:55.470783] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:35.915 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.915 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:35.915 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.915 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:35.915 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.915 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:35.915 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.915 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:35.915 [2024-07-26 16:15:55.487029] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:35.915 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.915 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:35.915 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.915 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:35.915 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.915 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:35.915 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.915 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:35.915 malloc0 00:10:35.915 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.915 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:35.916 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.916 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:35.916 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.916 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:35.916 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:35.916 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:35.916 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:35.916 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:35.916 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:35.916 { 00:10:35.916 "params": { 00:10:35.916 "name": "Nvme$subsystem", 00:10:35.916 "trtype": "$TEST_TRANSPORT", 00:10:35.916 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:35.916 "adrfam": "ipv4", 00:10:35.916 "trsvcid": "$NVMF_PORT", 00:10:35.916 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:35.916 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:35.916 "hdgst": ${hdgst:-false}, 00:10:35.916 "ddgst": ${ddgst:-false} 00:10:35.916 }, 00:10:35.916 "method": "bdev_nvme_attach_controller" 00:10:35.916 } 00:10:35.916 EOF 00:10:35.916 )") 00:10:35.916 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:35.916 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:10:35.916 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:35.916 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:35.916 "params": { 00:10:35.916 "name": "Nvme1", 00:10:35.916 "trtype": "tcp", 00:10:35.916 "traddr": "10.0.0.2", 00:10:35.916 "adrfam": "ipv4", 00:10:35.916 "trsvcid": "4420", 00:10:35.916 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:35.916 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:35.916 "hdgst": false, 00:10:35.916 "ddgst": false 00:10:35.916 }, 00:10:35.916 "method": "bdev_nvme_attach_controller" 00:10:35.916 }' 00:10:35.916 [2024-07-26 16:15:55.643004] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:35.916 [2024-07-26 16:15:55.643173] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid572285 ] 00:10:36.175 EAL: No free 2048 kB hugepages reported on node 1 00:10:36.175 [2024-07-26 16:15:55.775117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.435 [2024-07-26 16:15:56.033688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.005 Running I/O for 10 seconds... 00:10:46.988 00:10:46.988 Latency(us) 00:10:46.988 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:46.988 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:46.988 Verification LBA range: start 0x0 length 0x1000 00:10:46.988 Nvme1n1 : 10.03 3986.16 31.14 0.00 0.00 32024.48 4393.34 42525.58 00:10:46.988 =================================================================================================================== 00:10:46.988 Total : 3986.16 31.14 0.00 0.00 32024.48 4393.34 42525.58 00:10:48.368 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=573736 00:10:48.368 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:48.368 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:48.368 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:48.368 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:48.368 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:48.368 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:48.368 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:48.368 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:48.368 { 00:10:48.368 "params": { 00:10:48.368 "name": "Nvme$subsystem", 00:10:48.368 "trtype": "$TEST_TRANSPORT", 00:10:48.368 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:48.368 "adrfam": "ipv4", 00:10:48.368 "trsvcid": "$NVMF_PORT", 00:10:48.368 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:48.368 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:48.368 "hdgst": ${hdgst:-false}, 00:10:48.368 "ddgst": ${ddgst:-false} 00:10:48.368 }, 00:10:48.368 "method": "bdev_nvme_attach_controller" 00:10:48.368 } 00:10:48.368 EOF 00:10:48.368 )") 00:10:48.368 16:16:07 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:48.368 [2024-07-26 16:16:07.758367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.368 [2024-07-26 16:16:07.758427] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.368 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:10:48.368 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:48.368 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:48.368 "params": { 00:10:48.368 "name": "Nvme1", 00:10:48.368 "trtype": "tcp", 00:10:48.368 "traddr": "10.0.0.2", 00:10:48.368 "adrfam": "ipv4", 00:10:48.368 "trsvcid": "4420", 00:10:48.368 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:48.368 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:48.368 "hdgst": false, 00:10:48.368 "ddgst": false 00:10:48.368 }, 00:10:48.368 "method": "bdev_nvme_attach_controller" 00:10:48.368 }' 00:10:48.368 [2024-07-26 16:16:07.766242] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.368 [2024-07-26 16:16:07.766279] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.368 [2024-07-26 16:16:07.774269] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.368 [2024-07-26 16:16:07.774305] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:07.782289] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:07.782325] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:07.790300] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:07.790337] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:07.798353] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:07.798393] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:07.806364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:07.806399] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:07.814357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:07.814390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:07.822420] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:07.822454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:07.830412] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:07.830448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:07.838450] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:07.838485] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:07.840310] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 
24.03.0 initialization... 00:10:48.369 [2024-07-26 16:16:07.840446] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid573736 ] 00:10:48.369 [2024-07-26 16:16:07.846464] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:07.846499] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:07.854475] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:07.854502] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:07.862498] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:07.862525] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:07.870523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:07.870553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:07.878546] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:07.878575] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:07.886566] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:07.886593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:07.894570] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:07.894597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:07.902608] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:07.902635] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:07.910632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:07.910659] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 EAL: No free 2048 kB hugepages reported on node 1 00:10:48.369 [2024-07-26 16:16:07.918658] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:07.918684] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:07.926713] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:07.926746] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:07.934738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:07.934772] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:07.942737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:07.942770] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:10:48.369 [2024-07-26 16:16:07.950791] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:07.950825] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:07.958784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:07.958818] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:07.966823] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:07.966855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:07.974844] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:07.974878] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:07.978707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.369 [2024-07-26 16:16:07.982855] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:07.982889] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:07.990967] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:07.991016] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:07.998962] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:07.999011] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:08.006923] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:08.006956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:08.014981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:08.015016] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:08.022974] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:08.023012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:08.031010] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:08.031045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:08.039034] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:08.039077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:08.047038] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:08.047103] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:08.055083] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:08.055129] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:08.063123] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:08.063154] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:08.071110] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:08.071157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:08.079155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:08.079187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:08.087159] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:08.087189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:08.095194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:08.095223] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:08.103216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:08.103256] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:08.111238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:08.111268] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:08.119262] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:08.119298] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.369 [2024-07-26 16:16:08.127367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.369 [2024-07-26 16:16:08.127431] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.629 [2024-07-26 16:16:08.135294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.629 [2024-07-26 16:16:08.135327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.629 [2024-07-26 16:16:08.143321] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.629 [2024-07-26 16:16:08.143366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.629 [2024-07-26 16:16:08.151321] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.629 [2024-07-26 16:16:08.151366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.629 [2024-07-26 16:16:08.159382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.629 [2024-07-26 16:16:08.159416] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.629 [2024-07-26 16:16:08.167407] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.629 [2024-07-26 16:16:08.167441] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.629 [2024-07-26 16:16:08.175419] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.629 [2024-07-26 16:16:08.175453] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.629 [2024-07-26 16:16:08.183457] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.629 [2024-07-26 16:16:08.183491] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.629 [2024-07-26 16:16:08.191477] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.629 [2024-07-26 16:16:08.191511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.629 [2024-07-26 16:16:08.199484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.629 [2024-07-26 16:16:08.199527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.629 [2024-07-26 16:16:08.207546] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.629 [2024-07-26 16:16:08.207579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.629 [2024-07-26 16:16:08.215545] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.629 [2024-07-26 16:16:08.215578] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.629 [2024-07-26 16:16:08.223564] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.629 [2024-07-26 16:16:08.223599] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.629 [2024-07-26 16:16:08.231583] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.629 [2024-07-26 16:16:08.231616] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.629 [2024-07-26 16:16:08.239591] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.629 [2024-07-26 16:16:08.239624] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.629 [2024-07-26 16:16:08.247642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.629 [2024-07-26 16:16:08.247675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.629 [2024-07-26 16:16:08.251832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.629 [2024-07-26 16:16:08.255656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.629 [2024-07-26 16:16:08.255689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.629 [2024-07-26 16:16:08.263664] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.629 [2024-07-26 16:16:08.263698] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.629 [2024-07-26 16:16:08.271778] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.629 [2024-07-26 16:16:08.271830] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.629 [2024-07-26 16:16:08.279798] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.629 [2024-07-26 16:16:08.279854] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.629 [2024-07-26 16:16:08.287765] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.629 [2024-07-26 16:16:08.287800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:10:48.629 [2024-07-26 16:16:08.295779] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.629 [2024-07-26 16:16:08.295814] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.629 [2024-07-26 16:16:08.303805] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.629 [2024-07-26 16:16:08.303839] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.629 [2024-07-26 16:16:08.311824] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.629 [2024-07-26 16:16:08.311859] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.629 [2024-07-26 16:16:08.319849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.629 [2024-07-26 16:16:08.319883] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.629 [2024-07-26 16:16:08.327848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.629 [2024-07-26 16:16:08.327883] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.629 [2024-07-26 16:16:08.335894] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.629 [2024-07-26 16:16:08.335928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.629 [2024-07-26 16:16:08.343962] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.629 [2024-07-26 16:16:08.344017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.629 [2024-07-26 16:16:08.352013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.629 [2024-07-26 16:16:08.352078] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.629 [2024-07-26 16:16:08.360027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.629 [2024-07-26 16:16:08.360096] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.629 [2024-07-26 16:16:08.368028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.629 [2024-07-26 16:16:08.368090] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.629 [2024-07-26 16:16:08.376023] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.629 [2024-07-26 16:16:08.376075] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.629 [2024-07-26 16:16:08.384022] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.629 [2024-07-26 16:16:08.384055] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.889 [2024-07-26 16:16:08.392028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.889 [2024-07-26 16:16:08.392072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.889 [2024-07-26 16:16:08.400099] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.889 [2024-07-26 16:16:08.400133] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.889 [2024-07-26 16:16:08.408094] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:10:48.889 [2024-07-26 16:16:08.408127] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.889 [2024-07-26 16:16:08.416120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.889 [2024-07-26 16:16:08.416153] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.889 [2024-07-26 16:16:08.424151] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.889 [2024-07-26 16:16:08.424184] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.889 [2024-07-26 16:16:08.432155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.889 [2024-07-26 16:16:08.432189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.889 [2024-07-26 16:16:08.440200] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.889 [2024-07-26 16:16:08.440233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.889 [2024-07-26 16:16:08.448221] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.889 [2024-07-26 16:16:08.448255] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.889 [2024-07-26 16:16:08.456227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.889 [2024-07-26 16:16:08.456261] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.889 [2024-07-26 16:16:08.464273] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.889 [2024-07-26 16:16:08.464307] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.889 [2024-07-26 16:16:08.472261] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.889 [2024-07-26 16:16:08.472294] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.889 [2024-07-26 16:16:08.480299] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.889 [2024-07-26 16:16:08.480332] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.889 [2024-07-26 16:16:08.488318] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.889 [2024-07-26 16:16:08.488352] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.889 [2024-07-26 16:16:08.496349] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.889 [2024-07-26 16:16:08.496391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.889 [2024-07-26 16:16:08.504449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.889 [2024-07-26 16:16:08.504502] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.889 [2024-07-26 16:16:08.512482] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.889 [2024-07-26 16:16:08.512533] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.889 [2024-07-26 16:16:08.520441] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.889 [2024-07-26 16:16:08.520486] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.889 [2024-07-26 
16:16:08.528437] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.889 [2024-07-26 16:16:08.528470] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.889 [2024-07-26 16:16:08.536446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.889 [2024-07-26 16:16:08.536484] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.889 [2024-07-26 16:16:08.544482] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.889 [2024-07-26 16:16:08.544517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.889 [2024-07-26 16:16:08.552508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.889 [2024-07-26 16:16:08.552542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.889 [2024-07-26 16:16:08.560512] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.890 [2024-07-26 16:16:08.560545] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.890 [2024-07-26 16:16:08.568545] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.890 [2024-07-26 16:16:08.568578] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.890 [2024-07-26 16:16:08.576570] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.890 [2024-07-26 16:16:08.576603] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.890 [2024-07-26 16:16:08.584577] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.890 [2024-07-26 16:16:08.584609] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.890 [2024-07-26 16:16:08.592631] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.890 [2024-07-26 16:16:08.592664] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.890 [2024-07-26 16:16:08.600618] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.890 [2024-07-26 16:16:08.600652] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.890 [2024-07-26 16:16:08.608667] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.890 [2024-07-26 16:16:08.608700] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.890 [2024-07-26 16:16:08.616690] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.890 [2024-07-26 16:16:08.616735] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.890 [2024-07-26 16:16:08.624690] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.890 [2024-07-26 16:16:08.624724] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.890 [2024-07-26 16:16:08.632741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.890 [2024-07-26 16:16:08.632779] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.890 [2024-07-26 16:16:08.640774] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.890 [2024-07-26 16:16:08.640811] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.890 [2024-07-26 16:16:08.648775] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.890 [2024-07-26 16:16:08.648821] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.149 [2024-07-26 16:16:08.656824] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.149 [2024-07-26 16:16:08.656862] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.149 [2024-07-26 16:16:08.664813] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.149 [2024-07-26 16:16:08.664847] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.149 [2024-07-26 16:16:08.672860] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.149 [2024-07-26 16:16:08.672893] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.149 [2024-07-26 16:16:08.680885] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.149 [2024-07-26 16:16:08.680919] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.149 [2024-07-26 16:16:08.688918] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.149 [2024-07-26 16:16:08.688952] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.149 [2024-07-26 16:16:08.696955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.149 [2024-07-26 16:16:08.696990] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.149 [2024-07-26 16:16:08.704967] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.149 [2024-07-26 16:16:08.705004] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.149 [2024-07-26 16:16:08.712966] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.149 [2024-07-26 16:16:08.713003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.149 [2024-07-26 16:16:08.721014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.149 [2024-07-26 16:16:08.721048] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.149 [2024-07-26 16:16:08.729015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.149 [2024-07-26 16:16:08.729048] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.149 [2024-07-26 16:16:08.737055] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.149 [2024-07-26 16:16:08.737097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.149 [2024-07-26 16:16:08.745090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.149 [2024-07-26 16:16:08.745124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.149 [2024-07-26 16:16:08.753099] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.149 [2024-07-26 16:16:08.753136] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.149 [2024-07-26 16:16:08.761149] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.149 [2024-07-26 16:16:08.761184] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.149 [2024-07-26 16:16:08.769164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.149 [2024-07-26 16:16:08.769199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.149 [2024-07-26 16:16:08.777169] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.149 [2024-07-26 16:16:08.777203] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.149 [2024-07-26 16:16:08.785208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.149 [2024-07-26 16:16:08.785241] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.149 [2024-07-26 16:16:08.793216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.149 [2024-07-26 16:16:08.793254] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.149 [2024-07-26 16:16:08.801259] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.149 [2024-07-26 16:16:08.801301] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.149 [2024-07-26 16:16:08.809284] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.149 [2024-07-26 16:16:08.809319] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.149 [2024-07-26 16:16:08.817286] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.149 [2024-07-26 16:16:08.817320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.149 [2024-07-26 16:16:08.825326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.149 [2024-07-26 16:16:08.825359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.149 [2024-07-26 16:16:08.833351] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.149 [2024-07-26 16:16:08.833383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.149 [2024-07-26 16:16:08.841364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.149 [2024-07-26 16:16:08.841400] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.149 [2024-07-26 16:16:08.849419] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.149 [2024-07-26 16:16:08.849455] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.149 [2024-07-26 16:16:08.857426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.149 [2024-07-26 16:16:08.857465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.149 [2024-07-26 16:16:08.865460] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.149 [2024-07-26 16:16:08.865495] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.149 Running I/O for 5 seconds... 
00:10:49.149 [2024-07-26 16:16:08.880717] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.149 [2024-07-26 16:16:08.880760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.149 [2024-07-26 16:16:08.894921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.149 [2024-07-26 16:16:08.894962] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.149 [2024-07-26 16:16:08.910183] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.149 [2024-07-26 16:16:08.910224] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.410 [2024-07-26 16:16:08.925426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.410 [2024-07-26 16:16:08.925468] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.410 [2024-07-26 16:16:08.940668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.410 [2024-07-26 16:16:08.940708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.410 [2024-07-26 16:16:08.956198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.410 [2024-07-26 16:16:08.956238] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.410 [2024-07-26 16:16:08.971616] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.410 [2024-07-26 16:16:08.971656] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.410 [2024-07-26 16:16:08.986812] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.410 [2024-07-26 16:16:08.986852] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.410 [2024-07-26 16:16:09.002241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.410 [2024-07-26 16:16:09.002281] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.410 [2024-07-26 16:16:09.017440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.410 [2024-07-26 16:16:09.017480] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.410 [2024-07-26 16:16:09.032344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.410 [2024-07-26 16:16:09.032384] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.410 [2024-07-26 16:16:09.047216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.410 [2024-07-26 16:16:09.047257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.410 [2024-07-26 16:16:09.062415] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.410 [2024-07-26 16:16:09.062455] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.410 [2024-07-26 16:16:09.077322] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.410 [2024-07-26 16:16:09.077362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.410 [2024-07-26 16:16:09.092335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.410 
[2024-07-26 16:16:09.092376] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.410 [2024-07-26 16:16:09.107745] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.410 [2024-07-26 16:16:09.107786] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.410 [2024-07-26 16:16:09.122727] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.410 [2024-07-26 16:16:09.122767] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.410 [2024-07-26 16:16:09.137161] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.410 [2024-07-26 16:16:09.137201] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.410 [2024-07-26 16:16:09.152106] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.410 [2024-07-26 16:16:09.152145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.410 [2024-07-26 16:16:09.166646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.410 [2024-07-26 16:16:09.166687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.671 [2024-07-26 16:16:09.182191] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.671 [2024-07-26 16:16:09.182234] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.671 [2024-07-26 16:16:09.196815] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.671 [2024-07-26 16:16:09.196855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.671 [2024-07-26 16:16:09.211437] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.671 [2024-07-26 16:16:09.211476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.671 [2024-07-26 16:16:09.225980] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.671 [2024-07-26 16:16:09.226020] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.671 [2024-07-26 16:16:09.240645] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.671 [2024-07-26 16:16:09.240685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.671 [2024-07-26 16:16:09.254983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.671 [2024-07-26 16:16:09.255023] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.671 [2024-07-26 16:16:09.269835] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.671 [2024-07-26 16:16:09.269874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.671 [2024-07-26 16:16:09.284665] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.671 [2024-07-26 16:16:09.284705] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.671 [2024-07-26 16:16:09.299577] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.671 [2024-07-26 16:16:09.299616] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.671 [2024-07-26 16:16:09.315512] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.671 [2024-07-26 16:16:09.315553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.671 [2024-07-26 16:16:09.330260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.671 [2024-07-26 16:16:09.330300] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.671 [2024-07-26 16:16:09.345003] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.671 [2024-07-26 16:16:09.345044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.671 [2024-07-26 16:16:09.360017] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.671 [2024-07-26 16:16:09.360075] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.671 [2024-07-26 16:16:09.375647] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.671 [2024-07-26 16:16:09.375687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.671 [2024-07-26 16:16:09.391505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.671 [2024-07-26 16:16:09.391545] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.671 [2024-07-26 16:16:09.406321] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.671 [2024-07-26 16:16:09.406361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.671 [2024-07-26 16:16:09.421120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.671 [2024-07-26 16:16:09.421159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.931 [2024-07-26 16:16:09.436077] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.931 [2024-07-26 16:16:09.436118] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.931 [2024-07-26 16:16:09.451258] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.931 [2024-07-26 16:16:09.451297] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.931 [2024-07-26 16:16:09.466413] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.931 [2024-07-26 16:16:09.466455] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.931 [2024-07-26 16:16:09.482022] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.931 [2024-07-26 16:16:09.482075] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.931 [2024-07-26 16:16:09.498038] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.931 [2024-07-26 16:16:09.498089] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.931 [2024-07-26 16:16:09.513575] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.931 [2024-07-26 16:16:09.513615] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.931 [2024-07-26 16:16:09.529693] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.931 [2024-07-26 16:16:09.529733] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.931 [2024-07-26 16:16:09.544393] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.931 [2024-07-26 16:16:09.544434] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.931 [2024-07-26 16:16:09.559584] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.931 [2024-07-26 16:16:09.559624] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.931 [2024-07-26 16:16:09.572884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.931 [2024-07-26 16:16:09.572926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.931 [2024-07-26 16:16:09.588088] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.931 [2024-07-26 16:16:09.588127] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.931 [2024-07-26 16:16:09.603188] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.931 [2024-07-26 16:16:09.603239] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.931 [2024-07-26 16:16:09.618645] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.931 [2024-07-26 16:16:09.618686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.931 [2024-07-26 16:16:09.631117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.931 [2024-07-26 16:16:09.631156] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.931 [2024-07-26 16:16:09.645224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.931 [2024-07-26 16:16:09.645264] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.931 [2024-07-26 16:16:09.660079] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.931 [2024-07-26 16:16:09.660120] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.931 [2024-07-26 16:16:09.675680] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.931 [2024-07-26 16:16:09.675721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.931 [2024-07-26 16:16:09.691202] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.931 [2024-07-26 16:16:09.691242] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.192 [2024-07-26 16:16:09.706303] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.192 [2024-07-26 16:16:09.706343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.192 [2024-07-26 16:16:09.721344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.192 [2024-07-26 16:16:09.721384] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.192 [2024-07-26 16:16:09.736837] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.192 [2024-07-26 16:16:09.736877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.192 [2024-07-26 16:16:09.751941] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.192 [2024-07-26 16:16:09.751982] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.192 [2024-07-26 16:16:09.766792] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.192 [2024-07-26 16:16:09.766832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.192 [2024-07-26 16:16:09.781704] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.192 [2024-07-26 16:16:09.781744] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.192 [2024-07-26 16:16:09.796600] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.192 [2024-07-26 16:16:09.796639] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.192 [2024-07-26 16:16:09.811298] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.192 [2024-07-26 16:16:09.811338] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.192 [2024-07-26 16:16:09.826362] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.192 [2024-07-26 16:16:09.826402] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.192 [2024-07-26 16:16:09.840811] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.192 [2024-07-26 16:16:09.840850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.192 [2024-07-26 16:16:09.856019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.192 [2024-07-26 16:16:09.856069] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.192 [2024-07-26 16:16:09.870635] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.192 [2024-07-26 16:16:09.870685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.192 [2024-07-26 16:16:09.885324] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.192 [2024-07-26 16:16:09.885365] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.192 [2024-07-26 16:16:09.900342] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.192 [2024-07-26 16:16:09.900382] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.192 [2024-07-26 16:16:09.915279] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.192 [2024-07-26 16:16:09.915319] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.192 [2024-07-26 16:16:09.929967] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.192 [2024-07-26 16:16:09.930007] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.192 [2024-07-26 16:16:09.944955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.192 [2024-07-26 16:16:09.944994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.452 [2024-07-26 16:16:09.959907] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.452 [2024-07-26 16:16:09.959947] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.452 [2024-07-26 16:16:09.975020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.452 [2024-07-26 16:16:09.975068] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
(the same two-line error pair repeats for every subsequent add-namespace attempt, at roughly 15 ms intervals, through [2024-07-26 16:16:13.895982])
00:10:54.377 Latency(us)
00:10:54.377 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:54.377 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:54.377 Nvme1n1 : 5.02 8478.83 66.24 0.00 0.00 15068.73 4975.88 24272.59
00:10:54.377 ===================================================================================================================
00:10:54.377 Total : 8478.83 66.24 0.00 0.00 15068.73 4975.88 24272.59
00:10:54.377 [2024-07-26 16:16:13.903682] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:54.377 [2024-07-26 16:16:13.903720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the 'Requested NSID 1 already in use' / 'Unable to add namespace' error pair repeats unchanged (only the timestamps advance, roughly every 8 ms) for the remainder of the paused-subsystem loop; the intervening repetitions are elided ...]
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.159 [2024-07-26 16:16:14.906586] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.159 [2024-07-26 16:16:14.914562] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.159 [2024-07-26 16:16:14.914599] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.417 [2024-07-26 16:16:14.922583] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.417 [2024-07-26 16:16:14.922619] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.417 [2024-07-26 16:16:14.930588] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.417 [2024-07-26 16:16:14.930622] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.417 [2024-07-26 16:16:14.938646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.417 [2024-07-26 16:16:14.938679] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.417 [2024-07-26 16:16:14.946659] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.417 [2024-07-26 16:16:14.946692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.417 [2024-07-26 16:16:14.954651] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.417 [2024-07-26 16:16:14.954684] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.417 [2024-07-26 16:16:14.962712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.417 [2024-07-26 16:16:14.962746] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.417 [2024-07-26 16:16:14.970706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.417 [2024-07-26 16:16:14.970741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.417 [2024-07-26 16:16:14.978739] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.417 [2024-07-26 16:16:14.978772] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.417 [2024-07-26 16:16:14.986772] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.417 [2024-07-26 16:16:14.986806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (573736) - No such process 00:10:55.417 16:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 573736 00:10:55.417 16:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.417 16:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.417 16:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:55.417 16:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.417 16:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:55.417 16:16:14 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.417 16:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:55.417 delay0 00:10:55.417 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.417 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:55.417 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.417 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:55.417 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.417 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:55.417 EAL: No free 2048 kB hugepages reported on node 1 00:10:55.417 [2024-07-26 16:16:15.125946] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:03.538 Initializing NVMe Controllers 00:11:03.538 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:03.538 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:03.538 Initialization complete. Launching workers. 00:11:03.538 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 237, failed: 15276 00:11:03.538 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 15390, failed to submit 123 00:11:03.538 success 15294, unsuccess 96, failed 0 00:11:03.538 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:03.538 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:03.538 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:03.538 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:11:03.538 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:03.538 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:11:03.538 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:03.538 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:03.538 rmmod nvme_tcp 00:11:03.538 rmmod nvme_fabrics 00:11:03.538 rmmod nvme_keyring 00:11:03.538 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:03.538 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:11:03.538 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:11:03.538 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 572136 ']' 00:11:03.538 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 572136 00:11:03.538 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 572136 ']' 00:11:03.538 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 572136 00:11:03.538 16:16:22 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:11:03.538 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:03.538 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 572136 00:11:03.538 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:03.538 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:03.538 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 572136' 00:11:03.538 killing process with pid 572136 00:11:03.538 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 572136 00:11:03.538 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 572136 00:11:04.106 16:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:04.106 16:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:04.106 16:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:04.106 16:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:04.106 16:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:04.106 16:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.106 16:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:04.106 16:16:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:06.647 00:11:06.647 real 0m33.623s 00:11:06.647 user 0m49.138s 00:11:06.647 sys 0m9.580s 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:06.647 ************************************ 00:11:06.647 END TEST nvmf_zcopy 00:11:06.647 ************************************ 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:06.647 ************************************ 00:11:06.647 START TEST nvmf_nmic 00:11:06.647 ************************************ 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:06.647 * Looking for test storage... 
00:11:06.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:06.647 16:16:25 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:11:06.647 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:08.552 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:08.552 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:11:08.552 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:08.552 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:08.552 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:08.552 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:08.552 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:08.552 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:11:08.552 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:08.552 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:11:08.552 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:11:08.552 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:11:08.552 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:11:08.552 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:11:08.552 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:11:08.552 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:08.552 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:08.553 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:08.553 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.553 16:16:27 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:08.553 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:08.553 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:08.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:08.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:11:08.553 00:11:08.553 --- 10.0.0.2 ping statistics --- 00:11:08.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.553 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:08.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:08.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:11:08.553 00:11:08.553 --- 10.0.0.1 ping statistics --- 00:11:08.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.553 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=577520 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 577520 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 577520 ']' 00:11:08.553 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.554 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:08.554 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.554 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:08.554 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:08.554 [2024-07-26 16:16:28.069994] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:08.554 [2024-07-26 16:16:28.070173] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:08.554 EAL: No free 2048 kB hugepages reported on node 1 00:11:08.554 [2024-07-26 16:16:28.212959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:08.813 [2024-07-26 16:16:28.480823] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:08.813 [2024-07-26 16:16:28.480912] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:08.813 [2024-07-26 16:16:28.480940] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:08.813 [2024-07-26 16:16:28.480962] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:08.813 [2024-07-26 16:16:28.480984] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:08.813 [2024-07-26 16:16:28.481121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.813 [2024-07-26 16:16:28.481182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:08.813 [2024-07-26 16:16:28.481232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.813 [2024-07-26 16:16:28.481243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:09.382 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:09.382 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:11:09.382 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:09.382 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:09.382 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:09.382 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:09.382 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:09.382 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.382 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:09.382 [2024-07-26 16:16:29.015314] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:09.382 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.382 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:09.382 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.382 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:09.382 Malloc0 00:11:09.382 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.382 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:09.382 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.382 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:09.382 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.382 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:09.382 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.382 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:09.382 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.383 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:09.383 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.383 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:09.383 [2024-07-26 16:16:29.122641] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:09.383 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.383 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:09.383 test case1: single bdev can't be used in multiple subsystems 00:11:09.383 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:09.383 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.383 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:09.383 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.383 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:09.383 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.383 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:09.383 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.383 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:09.383 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:09.383 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.383 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:09.643 [2024-07-26 16:16:29.146357] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:09.643 [2024-07-26 16:16:29.146418] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:09.643 [2024-07-26 16:16:29.146449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.643 request: 00:11:09.643 { 00:11:09.643 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:09.643 "namespace": { 00:11:09.643 "bdev_name": "Malloc0", 00:11:09.643 "no_auto_visible": false 00:11:09.643 }, 00:11:09.643 "method": "nvmf_subsystem_add_ns", 00:11:09.643 "req_id": 1 00:11:09.643 } 00:11:09.643 Got JSON-RPC error response 00:11:09.643 response: 00:11:09.643 { 00:11:09.643 "code": -32602, 00:11:09.643 "message": "Invalid parameters" 00:11:09.643 } 00:11:09.643 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:09.643 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:09.643 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:09.643 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:09.643 Adding namespace failed - expected result. 
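A minimal sketch of reproducing the claim conflict above by hand, assuming the same running nvmf_tgt, RPC socket, and workspace layout as this run (not part of the captured output); every call shown also appears in the traced test script:
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 64 512 -b Malloc0                                        # shared bdev
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # first claim succeeds
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0                    # expected to fail: Malloc0 already claimed (exclusive_write), JSON-RPC error -32602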
00:11:09.643 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:09.643 test case2: host connect to nvmf target in multiple paths 00:11:09.643 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:09.643 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.643 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:09.643 [2024-07-26 16:16:29.154544] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:09.643 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.643 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:10.210 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:10.777 16:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:10.777 16:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:11:10.777 16:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:10.777 16:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:10.777 16:16:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:11:13.343 16:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:13.343 16:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:13.343 16:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:13.343 16:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:13.343 16:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:13.343 16:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:11:13.343 16:16:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:13.343 [global] 00:11:13.343 thread=1 00:11:13.343 invalidate=1 00:11:13.343 rw=write 00:11:13.343 time_based=1 00:11:13.343 runtime=1 00:11:13.343 ioengine=libaio 00:11:13.343 direct=1 00:11:13.343 bs=4096 00:11:13.343 iodepth=1 00:11:13.343 norandommap=0 00:11:13.343 numjobs=1 00:11:13.343 00:11:13.343 verify_dump=1 00:11:13.343 verify_backlog=512 00:11:13.343 verify_state_save=0 00:11:13.343 do_verify=1 00:11:13.343 verify=crc32c-intel 00:11:13.343 [job0] 00:11:13.343 filename=/dev/nvme0n1 00:11:13.343 Could not set queue depth (nvme0n1) 00:11:13.343 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:11:13.343 fio-3.35 00:11:13.343 Starting 1 thread 00:11:14.282 00:11:14.282 job0: (groupid=0, jobs=1): err= 0: pid=578169: Fri Jul 26 16:16:33 2024 00:11:14.282 read: IOPS=21, BW=86.0KiB/s (88.1kB/s)(88.0KiB/1023msec) 00:11:14.282 slat (nsec): min=7104, max=38418, avg=26671.55, stdev=10011.35 00:11:14.282 clat (usec): min=40874, max=41083, avg=40959.88, stdev=54.59 00:11:14.282 lat (usec): min=40881, max=41100, avg=40986.55, stdev=51.44 00:11:14.282 clat percentiles (usec): 00:11:14.282 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:11:14.282 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:14.282 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:14.282 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:14.282 | 99.99th=[41157] 00:11:14.282 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:11:14.282 slat (nsec): min=7020, max=30546, avg=7927.35, stdev=1631.79 00:11:14.282 clat (usec): min=203, max=405, avg=225.20, stdev=14.11 00:11:14.282 lat (usec): min=210, max=431, avg=233.13, stdev=14.73 00:11:14.282 clat percentiles (usec): 00:11:14.282 | 1.00th=[ 208], 5.00th=[ 210], 10.00th=[ 212], 20.00th=[ 215], 00:11:14.282 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 223], 60.00th=[ 227], 00:11:14.282 | 70.00th=[ 231], 80.00th=[ 235], 90.00th=[ 241], 95.00th=[ 245], 00:11:14.282 | 99.00th=[ 265], 99.50th=[ 269], 99.90th=[ 408], 99.95th=[ 408], 00:11:14.282 | 99.99th=[ 408] 00:11:14.282 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:11:14.283 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:14.283 lat (usec) : 250=92.51%, 500=3.37% 00:11:14.283 lat (msec) : 50=4.12% 00:11:14.283 cpu : usr=0.39%, sys=0.49%, ctx=534, majf=0, minf=2 00:11:14.283 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.283 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.283 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.283 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.283 00:11:14.283 Run status group 0 (all jobs): 00:11:14.283 READ: bw=86.0KiB/s (88.1kB/s), 86.0KiB/s-86.0KiB/s (88.1kB/s-88.1kB/s), io=88.0KiB (90.1kB), run=1023-1023msec 00:11:14.283 WRITE: bw=2002KiB/s (2050kB/s), 2002KiB/s-2002KiB/s (2050kB/s-2050kB/s), io=2048KiB (2097kB), run=1023-1023msec 00:11:14.283 00:11:14.283 Disk stats (read/write): 00:11:14.283 nvme0n1: ios=68/512, merge=0/0, ticks=790/108, in_queue=898, util=92.38% 00:11:14.283 16:16:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:14.541 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:14.541 16:16:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:14.541 16:16:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:11:14.541 16:16:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:14.541 16:16:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:14.541 16:16:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:14.541 16:16:34 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:14.541 16:16:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:11:14.541 16:16:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:14.541 16:16:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:14.541 16:16:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:14.541 16:16:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:11:14.541 16:16:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:14.541 16:16:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:11:14.541 16:16:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:14.541 16:16:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:14.541 rmmod nvme_tcp 00:11:14.541 rmmod nvme_fabrics 00:11:14.541 rmmod nvme_keyring 00:11:14.541 16:16:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:14.541 16:16:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:11:14.541 16:16:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:11:14.541 16:16:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 577520 ']' 00:11:14.541 16:16:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 577520 00:11:14.541 16:16:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 577520 ']' 00:11:14.541 16:16:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 577520 00:11:14.541 16:16:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:11:14.541 16:16:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:14.541 16:16:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 577520 00:11:14.541 16:16:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:14.541 16:16:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:14.541 16:16:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 577520' 00:11:14.541 killing process with pid 577520 00:11:14.541 16:16:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 577520 00:11:14.541 16:16:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 577520 00:11:16.447 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:16.447 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:16.447 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:16.447 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:16.447 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:16.447 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.447 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.447 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.354 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:18.354 00:11:18.354 real 0m11.915s 00:11:18.354 user 0m28.298s 00:11:18.354 sys 0m2.507s 00:11:18.354 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:18.354 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:18.354 ************************************ 00:11:18.354 END TEST nvmf_nmic 00:11:18.354 ************************************ 00:11:18.354 16:16:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:18.354 16:16:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:18.354 16:16:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:18.354 16:16:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:18.354 ************************************ 00:11:18.354 START TEST nvmf_fio_target 00:11:18.354 ************************************ 00:11:18.354 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:18.354 * Looking for test storage... 00:11:18.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:18.354 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:18.354 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:18.354 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:18.354 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:18.354 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:18.354 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:18.354 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:18.354 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:18.354 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:18.354 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:18.354 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:18.354 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:18.354 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:18.354 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:18.354 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:11:18.354 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:18.354 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:18.354 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:18.354 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:18.354 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:18.354 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:18.354 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:18.354 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.355 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.355 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.355 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:18.355 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.355 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:11:18.355 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:18.355 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:18.355 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:18.355 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:18.355 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:18.355 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:18.355 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:18.355 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:18.355 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:18.355 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:18.355 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:18.355 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:18.355 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:18.355 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:18.355 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:18.355 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:18.355 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:18.355 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.355 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:18.355 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.355 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:18.355 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:18.355 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:11:18.355 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:20.262 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:20.262 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:20.262 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:20.263 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:20.263 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:20.263 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:20.263 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:20.263 16:16:40 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:20.522 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:20.522 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:20.522 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:20.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:20.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:11:20.522 00:11:20.522 --- 10.0.0.2 ping statistics --- 00:11:20.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.522 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:11:20.522 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:20.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:20.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:11:20.522 00:11:20.522 --- 10.0.0.1 ping statistics --- 00:11:20.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.522 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:11:20.522 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:20.522 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:11:20.522 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:20.522 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:20.522 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:20.522 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:20.522 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:20.522 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:20.522 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:20.522 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:20.522 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:20.522 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:20.522 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.522 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=580382 00:11:20.522 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 580382 00:11:20.522 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:20.522 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 580382 ']' 00:11:20.522 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.522 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- common/autotest_common.sh@836 -- # local max_retries=100 00:11:20.522 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:20.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.522 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:20.522 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.522 [2024-07-26 16:16:40.179821] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:20.522 [2024-07-26 16:16:40.179978] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:20.522 EAL: No free 2048 kB hugepages reported on node 1 00:11:20.782 [2024-07-26 16:16:40.324548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:21.043 [2024-07-26 16:16:40.592278] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:21.043 [2024-07-26 16:16:40.592362] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:21.043 [2024-07-26 16:16:40.592391] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:21.043 [2024-07-26 16:16:40.592413] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:21.043 [2024-07-26 16:16:40.592443] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:21.043 [2024-07-26 16:16:40.592592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.043 [2024-07-26 16:16:40.592651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:21.043 [2024-07-26 16:16:40.592944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.043 [2024-07-26 16:16:40.592954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:21.610 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:21.610 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:11:21.610 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:21.610 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:21.610 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.610 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:21.610 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:21.610 [2024-07-26 16:16:41.340543] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:21.610 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:22.180 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:22.180 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:22.439 16:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:22.439 16:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:22.697 16:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:22.697 16:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:22.955 16:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:22.955 16:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:23.213 16:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:23.779 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:23.779 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:24.038 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:24.038 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:24.296 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:24.296 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:24.554 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:24.813 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:24.813 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:25.071 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:25.071 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:25.328 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:25.585 [2024-07-26 16:16:45.162683] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:25.585 16:16:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:25.843 16:16:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:26.101 16:16:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:26.669 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:26.669 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:11:26.669 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:26.669 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:11:26.669 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:11:26.669 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:11:29.204 16:16:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:29.204 16:16:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:29.204 16:16:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:29.204 16:16:48 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:11:29.204 16:16:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:29.204 16:16:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:11:29.204 16:16:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:29.204 [global] 00:11:29.204 thread=1 00:11:29.205 invalidate=1 00:11:29.205 rw=write 00:11:29.205 time_based=1 00:11:29.205 runtime=1 00:11:29.205 ioengine=libaio 00:11:29.205 direct=1 00:11:29.205 bs=4096 00:11:29.205 iodepth=1 00:11:29.205 norandommap=0 00:11:29.205 numjobs=1 00:11:29.205 00:11:29.205 verify_dump=1 00:11:29.205 verify_backlog=512 00:11:29.205 verify_state_save=0 00:11:29.205 do_verify=1 00:11:29.205 verify=crc32c-intel 00:11:29.205 [job0] 00:11:29.205 filename=/dev/nvme0n1 00:11:29.205 [job1] 00:11:29.205 filename=/dev/nvme0n2 00:11:29.205 [job2] 00:11:29.205 filename=/dev/nvme0n3 00:11:29.205 [job3] 00:11:29.205 filename=/dev/nvme0n4 00:11:29.205 Could not set queue depth (nvme0n1) 00:11:29.205 Could not set queue depth (nvme0n2) 00:11:29.205 Could not set queue depth (nvme0n3) 00:11:29.205 Could not set queue depth (nvme0n4) 00:11:29.205 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:29.205 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:29.205 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:29.205 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:29.205 fio-3.35 00:11:29.205 Starting 4 threads 00:11:30.142 00:11:30.142 job0: (groupid=0, jobs=1): err= 0: pid=581585: Fri Jul 26 16:16:49 2024 00:11:30.142 read: IOPS=977, BW=3908KiB/s (4002kB/s)(3928KiB/1005msec) 00:11:30.142 slat (nsec): min=5432, max=68606, avg=20456.20, stdev=11634.36 00:11:30.142 clat (usec): min=325, max=41201, avg=647.38, stdev=2721.89 00:11:30.142 lat (usec): min=332, max=41235, avg=667.83, stdev=2722.89 00:11:30.142 clat percentiles (usec): 00:11:30.142 | 1.00th=[ 334], 5.00th=[ 355], 10.00th=[ 371], 20.00th=[ 404], 00:11:30.142 | 30.00th=[ 416], 40.00th=[ 429], 50.00th=[ 449], 60.00th=[ 469], 00:11:30.142 | 70.00th=[ 478], 80.00th=[ 494], 90.00th=[ 529], 95.00th=[ 562], 00:11:30.142 | 99.00th=[ 611], 99.50th=[26346], 99.90th=[41157], 99.95th=[41157], 00:11:30.142 | 99.99th=[41157] 00:11:30.142 write: IOPS=1018, BW=4076KiB/s (4173kB/s)(4096KiB/1005msec); 0 zone resets 00:11:30.142 slat (nsec): min=6408, max=77218, avg=17538.24, stdev=10868.10 00:11:30.142 clat (usec): min=206, max=4063, avg=312.66, stdev=175.53 00:11:30.142 lat (usec): min=215, max=4073, avg=330.19, stdev=178.19 00:11:30.142 clat percentiles (usec): 00:11:30.142 | 1.00th=[ 217], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 241], 00:11:30.142 | 30.00th=[ 251], 40.00th=[ 265], 50.00th=[ 281], 60.00th=[ 310], 00:11:30.142 | 70.00th=[ 338], 80.00th=[ 371], 90.00th=[ 408], 95.00th=[ 433], 00:11:30.142 | 99.00th=[ 515], 99.50th=[ 644], 99.90th=[ 2966], 99.95th=[ 4047], 00:11:30.142 | 99.99th=[ 4047] 00:11:30.142 bw ( KiB/s): min= 2800, max= 5392, per=25.73%, avg=4096.00, stdev=1832.82, samples=2 00:11:30.142 iops : min= 700, max= 1348, avg=1024.00, 
stdev=458.21, samples=2 00:11:30.142 lat (usec) : 250=14.41%, 500=76.17%, 750=8.87% 00:11:30.142 lat (msec) : 2=0.10%, 4=0.10%, 10=0.10%, 50=0.25% 00:11:30.142 cpu : usr=2.09%, sys=4.08%, ctx=2008, majf=0, minf=2 00:11:30.142 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:30.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.142 issued rwts: total=982,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:30.142 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:30.142 job1: (groupid=0, jobs=1): err= 0: pid=581586: Fri Jul 26 16:16:49 2024 00:11:30.142 read: IOPS=998, BW=3992KiB/s (4088kB/s)(4108KiB/1029msec) 00:11:30.142 slat (nsec): min=5205, max=42319, avg=11771.29, stdev=5085.79 00:11:30.142 clat (usec): min=322, max=41058, avg=512.71, stdev=2194.21 00:11:30.142 lat (usec): min=331, max=41094, avg=524.48, stdev=2195.30 00:11:30.142 clat percentiles (usec): 00:11:30.142 | 1.00th=[ 330], 5.00th=[ 343], 10.00th=[ 347], 20.00th=[ 355], 00:11:30.142 | 30.00th=[ 359], 40.00th=[ 367], 50.00th=[ 371], 60.00th=[ 379], 00:11:30.142 | 70.00th=[ 396], 80.00th=[ 424], 90.00th=[ 478], 95.00th=[ 537], 00:11:30.142 | 99.00th=[ 652], 99.50th=[ 676], 99.90th=[41157], 99.95th=[41157], 00:11:30.142 | 99.99th=[41157] 00:11:30.142 write: IOPS=1492, BW=5971KiB/s (6114kB/s)(6144KiB/1029msec); 0 zone resets 00:11:30.142 slat (nsec): min=7154, max=72799, avg=19560.22, stdev=9514.70 00:11:30.142 clat (usec): min=219, max=3476, avg=291.57, stdev=92.89 00:11:30.142 lat (usec): min=228, max=3486, avg=311.13, stdev=94.30 00:11:30.142 clat percentiles (usec): 00:11:30.142 | 1.00th=[ 233], 5.00th=[ 243], 10.00th=[ 249], 20.00th=[ 258], 00:11:30.142 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 289], 00:11:30.142 | 70.00th=[ 297], 80.00th=[ 318], 90.00th=[ 355], 95.00th=[ 375], 00:11:30.142 | 99.00th=[ 433], 99.50th=[ 449], 99.90th=[ 816], 99.95th=[ 3490], 00:11:30.142 | 99.99th=[ 3490] 00:11:30.142 bw ( KiB/s): min= 5328, max= 6960, per=38.59%, avg=6144.00, stdev=1154.00, samples=2 00:11:30.142 iops : min= 1332, max= 1740, avg=1536.00, stdev=288.50, samples=2 00:11:30.142 lat (usec) : 250=6.36%, 500=90.36%, 750=3.08%, 1000=0.04% 00:11:30.142 lat (msec) : 4=0.04%, 50=0.12% 00:11:30.142 cpu : usr=2.72%, sys=5.25%, ctx=2564, majf=0, minf=1 00:11:30.142 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:30.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.142 issued rwts: total=1027,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:30.142 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:30.142 job2: (groupid=0, jobs=1): err= 0: pid=581587: Fri Jul 26 16:16:49 2024 00:11:30.142 read: IOPS=910, BW=3640KiB/s (3728kB/s)(3644KiB/1001msec) 00:11:30.142 slat (nsec): min=5323, max=58306, avg=18624.35, stdev=13219.34 00:11:30.142 clat (usec): min=329, max=41542, avg=692.43, stdev=3296.02 00:11:30.142 lat (usec): min=339, max=41576, avg=711.05, stdev=3295.94 00:11:30.142 clat percentiles (usec): 00:11:30.142 | 1.00th=[ 343], 5.00th=[ 355], 10.00th=[ 359], 20.00th=[ 367], 00:11:30.142 | 30.00th=[ 375], 40.00th=[ 383], 50.00th=[ 396], 60.00th=[ 416], 00:11:30.142 | 70.00th=[ 437], 80.00th=[ 461], 90.00th=[ 510], 95.00th=[ 537], 00:11:30.142 | 99.00th=[ 1090], 99.50th=[41157], 99.90th=[41681], 
99.95th=[41681], 00:11:30.142 | 99.99th=[41681] 00:11:30.142 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:11:30.142 slat (nsec): min=6947, max=78313, avg=18436.73, stdev=10263.82 00:11:30.142 clat (usec): min=227, max=642, avg=315.67, stdev=44.19 00:11:30.142 lat (usec): min=236, max=654, avg=334.11, stdev=48.48 00:11:30.142 clat percentiles (usec): 00:11:30.142 | 1.00th=[ 241], 5.00th=[ 253], 10.00th=[ 265], 20.00th=[ 281], 00:11:30.142 | 30.00th=[ 289], 40.00th=[ 302], 50.00th=[ 310], 60.00th=[ 322], 00:11:30.142 | 70.00th=[ 334], 80.00th=[ 347], 90.00th=[ 379], 95.00th=[ 396], 00:11:30.142 | 99.00th=[ 424], 99.50th=[ 457], 99.90th=[ 594], 99.95th=[ 644], 00:11:30.142 | 99.99th=[ 644] 00:11:30.142 bw ( KiB/s): min= 6920, max= 6920, per=43.46%, avg=6920.00, stdev= 0.00, samples=1 00:11:30.142 iops : min= 1730, max= 1730, avg=1730.00, stdev= 0.00, samples=1 00:11:30.142 lat (usec) : 250=1.96%, 500=93.02%, 750=4.39%, 1000=0.05% 00:11:30.142 lat (msec) : 2=0.21%, 10=0.05%, 50=0.31% 00:11:30.143 cpu : usr=2.60%, sys=3.90%, ctx=1937, majf=0, minf=1 00:11:30.143 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:30.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.143 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.143 issued rwts: total=911,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:30.143 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:30.143 job3: (groupid=0, jobs=1): err= 0: pid=581588: Fri Jul 26 16:16:49 2024 00:11:30.143 read: IOPS=19, BW=77.9KiB/s (79.8kB/s)(80.0KiB/1027msec) 00:11:30.143 slat (nsec): min=8778, max=44917, avg=23487.35, stdev=11358.19 00:11:30.143 clat (usec): min=40940, max=42045, avg=41392.44, stdev=496.40 00:11:30.143 lat (usec): min=40976, max=42058, avg=41415.93, stdev=491.45 00:11:30.143 clat percentiles (usec): 00:11:30.143 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:30.143 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:30.143 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:30.143 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:30.143 | 99.99th=[42206] 00:11:30.143 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:11:30.143 slat (nsec): min=9286, max=72369, avg=22553.64, stdev=11885.79 00:11:30.143 clat (usec): min=241, max=1066, avg=358.64, stdev=81.18 00:11:30.143 lat (usec): min=251, max=1076, avg=381.20, stdev=83.23 00:11:30.143 clat percentiles (usec): 00:11:30.143 | 1.00th=[ 247], 5.00th=[ 269], 10.00th=[ 285], 20.00th=[ 302], 00:11:30.143 | 30.00th=[ 314], 40.00th=[ 326], 50.00th=[ 338], 60.00th=[ 363], 00:11:30.143 | 70.00th=[ 388], 80.00th=[ 408], 90.00th=[ 445], 95.00th=[ 494], 00:11:30.143 | 99.00th=[ 553], 99.50th=[ 865], 99.90th=[ 1074], 99.95th=[ 1074], 00:11:30.143 | 99.99th=[ 1074] 00:11:30.143 bw ( KiB/s): min= 4096, max= 4096, per=25.73%, avg=4096.00, stdev= 0.00, samples=1 00:11:30.143 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:30.143 lat (usec) : 250=1.13%, 500=91.35%, 750=3.20%, 1000=0.19% 00:11:30.143 lat (msec) : 2=0.38%, 50=3.76% 00:11:30.143 cpu : usr=0.88%, sys=1.27%, ctx=533, majf=0, minf=1 00:11:30.143 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:30.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.143 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.143 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:30.143 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:30.143 00:11:30.143 Run status group 0 (all jobs): 00:11:30.143 READ: bw=11.2MiB/s (11.7MB/s), 77.9KiB/s-3992KiB/s (79.8kB/s-4088kB/s), io=11.5MiB (12.0MB), run=1001-1029msec 00:11:30.143 WRITE: bw=15.5MiB/s (16.3MB/s), 1994KiB/s-5971KiB/s (2042kB/s-6114kB/s), io=16.0MiB (16.8MB), run=1001-1029msec 00:11:30.143 00:11:30.143 Disk stats (read/write): 00:11:30.143 nvme0n1: ios=1026/1024, merge=0/0, ticks=580/297, in_queue=877, util=85.57% 00:11:30.143 nvme0n2: ios=1080/1359, merge=0/0, ticks=480/381, in_queue=861, util=91.15% 00:11:30.143 nvme0n3: ios=644/1024, merge=0/0, ticks=1392/311, in_queue=1703, util=93.42% 00:11:30.143 nvme0n4: ios=72/512, merge=0/0, ticks=1326/165, in_queue=1491, util=94.11% 00:11:30.143 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:30.143 [global] 00:11:30.143 thread=1 00:11:30.143 invalidate=1 00:11:30.143 rw=randwrite 00:11:30.143 time_based=1 00:11:30.143 runtime=1 00:11:30.143 ioengine=libaio 00:11:30.143 direct=1 00:11:30.143 bs=4096 00:11:30.143 iodepth=1 00:11:30.143 norandommap=0 00:11:30.143 numjobs=1 00:11:30.143 00:11:30.143 verify_dump=1 00:11:30.143 verify_backlog=512 00:11:30.143 verify_state_save=0 00:11:30.143 do_verify=1 00:11:30.143 verify=crc32c-intel 00:11:30.143 [job0] 00:11:30.143 filename=/dev/nvme0n1 00:11:30.143 [job1] 00:11:30.143 filename=/dev/nvme0n2 00:11:30.143 [job2] 00:11:30.143 filename=/dev/nvme0n3 00:11:30.143 [job3] 00:11:30.143 filename=/dev/nvme0n4 00:11:30.408 Could not set queue depth (nvme0n1) 00:11:30.408 Could not set queue depth (nvme0n2) 00:11:30.408 Could not set queue depth (nvme0n3) 00:11:30.408 Could not set queue depth (nvme0n4) 00:11:30.408 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:30.408 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:30.408 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:30.408 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:30.408 fio-3.35 00:11:30.408 Starting 4 threads 00:11:31.791 00:11:31.791 job0: (groupid=0, jobs=1): err= 0: pid=581814: Fri Jul 26 16:16:51 2024 00:11:31.791 read: IOPS=515, BW=2062KiB/s (2111kB/s)(2064KiB/1001msec) 00:11:31.791 slat (nsec): min=5187, max=49194, avg=15960.71, stdev=7202.17 00:11:31.791 clat (usec): min=311, max=41538, avg=1285.83, stdev=5885.93 00:11:31.791 lat (usec): min=316, max=41570, avg=1301.79, stdev=5886.05 00:11:31.791 clat percentiles (usec): 00:11:31.791 | 1.00th=[ 330], 5.00th=[ 351], 10.00th=[ 359], 20.00th=[ 375], 00:11:31.791 | 30.00th=[ 388], 40.00th=[ 392], 50.00th=[ 400], 60.00th=[ 404], 00:11:31.791 | 70.00th=[ 412], 80.00th=[ 433], 90.00th=[ 529], 95.00th=[ 676], 00:11:31.791 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:11:31.791 | 99.99th=[41681] 00:11:31.791 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:11:31.791 slat (nsec): min=7136, max=55609, avg=15269.50, stdev=7097.97 00:11:31.791 clat (usec): min=220, max=654, avg=298.44, stdev=55.03 00:11:31.791 lat (usec): min=236, max=663, avg=313.71, 
stdev=52.86 00:11:31.791 clat percentiles (usec): 00:11:31.791 | 1.00th=[ 227], 5.00th=[ 243], 10.00th=[ 251], 20.00th=[ 260], 00:11:31.791 | 30.00th=[ 265], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 293], 00:11:31.791 | 70.00th=[ 302], 80.00th=[ 334], 90.00th=[ 383], 95.00th=[ 408], 00:11:31.791 | 99.00th=[ 474], 99.50th=[ 510], 99.90th=[ 635], 99.95th=[ 652], 00:11:31.791 | 99.99th=[ 652] 00:11:31.791 bw ( KiB/s): min= 4096, max= 4096, per=34.67%, avg=4096.00, stdev= 0.00, samples=1 00:11:31.791 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:31.791 lat (usec) : 250=6.36%, 500=88.83%, 750=3.77%, 1000=0.32% 00:11:31.791 lat (msec) : 50=0.71% 00:11:31.791 cpu : usr=1.30%, sys=3.80%, ctx=1540, majf=0, minf=1 00:11:31.791 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:31.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.791 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.791 issued rwts: total=516,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.791 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:31.791 job1: (groupid=0, jobs=1): err= 0: pid=581815: Fri Jul 26 16:16:51 2024 00:11:31.791 read: IOPS=861, BW=3446KiB/s (3529kB/s)(3584KiB/1040msec) 00:11:31.791 slat (nsec): min=5744, max=79722, avg=13357.25, stdev=7361.35 00:11:31.791 clat (usec): min=300, max=41438, avg=770.35, stdev=4065.21 00:11:31.791 lat (usec): min=307, max=41455, avg=783.71, stdev=4066.63 00:11:31.791 clat percentiles (usec): 00:11:31.791 | 1.00th=[ 306], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 326], 00:11:31.791 | 30.00th=[ 334], 40.00th=[ 343], 50.00th=[ 351], 60.00th=[ 355], 00:11:31.791 | 70.00th=[ 363], 80.00th=[ 371], 90.00th=[ 392], 95.00th=[ 570], 00:11:31.791 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:11:31.791 | 99.99th=[41681] 00:11:31.791 write: IOPS=984, BW=3938KiB/s (4033kB/s)(4096KiB/1040msec); 0 zone resets 00:11:31.791 slat (nsec): min=7773, max=68665, avg=19271.64, stdev=8968.90 00:11:31.791 clat (usec): min=220, max=580, avg=301.65, stdev=62.43 00:11:31.791 lat (usec): min=231, max=604, avg=320.92, stdev=62.02 00:11:31.791 clat percentiles (usec): 00:11:31.791 | 1.00th=[ 237], 5.00th=[ 247], 10.00th=[ 253], 20.00th=[ 260], 00:11:31.791 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 281], 00:11:31.791 | 70.00th=[ 306], 80.00th=[ 355], 90.00th=[ 400], 95.00th=[ 433], 00:11:31.791 | 99.00th=[ 486], 99.50th=[ 502], 99.90th=[ 578], 99.95th=[ 578], 00:11:31.791 | 99.99th=[ 578] 00:11:31.791 bw ( KiB/s): min= 2352, max= 5840, per=34.67%, avg=4096.00, stdev=2466.39, samples=2 00:11:31.791 iops : min= 588, max= 1460, avg=1024.00, stdev=616.60, samples=2 00:11:31.791 lat (usec) : 250=4.53%, 500=92.34%, 750=2.60%, 1000=0.05% 00:11:31.791 lat (msec) : 50=0.47% 00:11:31.791 cpu : usr=1.92%, sys=4.33%, ctx=1921, majf=0, minf=2 00:11:31.791 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:31.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.791 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.791 issued rwts: total=896,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.791 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:31.791 job2: (groupid=0, jobs=1): err= 0: pid=581816: Fri Jul 26 16:16:51 2024 00:11:31.791 read: IOPS=23, BW=95.2KiB/s (97.5kB/s)(96.0KiB/1008msec) 00:11:31.791 slat (nsec): min=7324, max=48836, avg=30032.46, 
stdev=10285.52 00:11:31.791 clat (usec): min=547, max=41365, avg=34283.55, stdev=15409.45 00:11:31.791 lat (usec): min=582, max=41398, avg=34313.58, stdev=15405.51 00:11:31.791 clat percentiles (usec): 00:11:31.791 | 1.00th=[ 545], 5.00th=[ 553], 10.00th=[ 562], 20.00th=[40633], 00:11:31.791 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:31.791 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:31.791 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:31.791 | 99.99th=[41157] 00:11:31.791 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:11:31.791 slat (nsec): min=7506, max=40040, avg=12757.70, stdev=5446.49 00:11:31.791 clat (usec): min=231, max=662, avg=341.98, stdev=82.00 00:11:31.791 lat (usec): min=239, max=671, avg=354.73, stdev=84.25 00:11:31.791 clat percentiles (usec): 00:11:31.791 | 1.00th=[ 239], 5.00th=[ 247], 10.00th=[ 253], 20.00th=[ 265], 00:11:31.791 | 30.00th=[ 277], 40.00th=[ 293], 50.00th=[ 314], 60.00th=[ 363], 00:11:31.791 | 70.00th=[ 400], 80.00th=[ 420], 90.00th=[ 457], 95.00th=[ 478], 00:11:31.792 | 99.00th=[ 545], 99.50th=[ 586], 99.90th=[ 660], 99.95th=[ 660], 00:11:31.792 | 99.99th=[ 660] 00:11:31.792 bw ( KiB/s): min= 4096, max= 4096, per=34.67%, avg=4096.00, stdev= 0.00, samples=1 00:11:31.792 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:31.792 lat (usec) : 250=6.72%, 500=86.01%, 750=3.54% 00:11:31.792 lat (msec) : 50=3.73% 00:11:31.792 cpu : usr=0.40%, sys=0.89%, ctx=537, majf=0, minf=1 00:11:31.792 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:31.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.792 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.792 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.792 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:31.792 job3: (groupid=0, jobs=1): err= 0: pid=581817: Fri Jul 26 16:16:51 2024 00:11:31.792 read: IOPS=61, BW=246KiB/s (252kB/s)(252KiB/1024msec) 00:11:31.792 slat (nsec): min=6822, max=41298, avg=25690.37, stdev=9631.49 00:11:31.792 clat (usec): min=358, max=41988, avg=13924.50, stdev=19249.36 00:11:31.792 lat (usec): min=376, max=42023, avg=13950.19, stdev=19250.87 00:11:31.792 clat percentiles (usec): 00:11:31.792 | 1.00th=[ 359], 5.00th=[ 375], 10.00th=[ 392], 20.00th=[ 412], 00:11:31.792 | 30.00th=[ 420], 40.00th=[ 437], 50.00th=[ 494], 60.00th=[ 506], 00:11:31.792 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:11:31.792 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:31.792 | 99.99th=[42206] 00:11:31.792 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:11:31.792 slat (nsec): min=6302, max=34478, avg=9455.72, stdev=5050.11 00:11:31.792 clat (usec): min=224, max=506, avg=268.89, stdev=40.55 00:11:31.792 lat (usec): min=231, max=516, avg=278.34, stdev=41.29 00:11:31.792 clat percentiles (usec): 00:11:31.792 | 1.00th=[ 235], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 247], 00:11:31.792 | 30.00th=[ 251], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 265], 00:11:31.792 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 302], 95.00th=[ 363], 00:11:31.792 | 99.00th=[ 457], 99.50th=[ 474], 99.90th=[ 506], 99.95th=[ 506], 00:11:31.792 | 99.99th=[ 506] 00:11:31.792 bw ( KiB/s): min= 4096, max= 4096, per=34.67%, avg=4096.00, stdev= 0.00, samples=1 00:11:31.792 iops : min= 1024, max= 
1024, avg=1024.00, stdev= 0.00, samples=1 00:11:31.792 lat (usec) : 250=25.04%, 500=69.57%, 750=1.74% 00:11:31.792 lat (msec) : 50=3.65% 00:11:31.792 cpu : usr=0.00%, sys=0.98%, ctx=578, majf=0, minf=1 00:11:31.792 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:31.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.792 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.792 issued rwts: total=63,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.792 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:31.792 00:11:31.792 Run status group 0 (all jobs): 00:11:31.792 READ: bw=5765KiB/s (5904kB/s), 95.2KiB/s-3446KiB/s (97.5kB/s-3529kB/s), io=5996KiB (6140kB), run=1001-1040msec 00:11:31.792 WRITE: bw=11.5MiB/s (12.1MB/s), 2000KiB/s-4092KiB/s (2048kB/s-4190kB/s), io=12.0MiB (12.6MB), run=1001-1040msec 00:11:31.792 00:11:31.792 Disk stats (read/write): 00:11:31.792 nvme0n1: ios=562/673, merge=0/0, ticks=644/204, in_queue=848, util=87.78% 00:11:31.792 nvme0n2: ios=797/1024, merge=0/0, ticks=572/300, in_queue=872, util=95.53% 00:11:31.792 nvme0n3: ios=65/512, merge=0/0, ticks=774/170, in_queue=944, util=96.56% 00:11:31.792 nvme0n4: ios=82/512, merge=0/0, ticks=1615/138, in_queue=1753, util=98.42% 00:11:31.792 16:16:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:31.792 [global] 00:11:31.792 thread=1 00:11:31.792 invalidate=1 00:11:31.792 rw=write 00:11:31.792 time_based=1 00:11:31.792 runtime=1 00:11:31.792 ioengine=libaio 00:11:31.792 direct=1 00:11:31.792 bs=4096 00:11:31.792 iodepth=128 00:11:31.792 norandommap=0 00:11:31.792 numjobs=1 00:11:31.792 00:11:31.792 verify_dump=1 00:11:31.792 verify_backlog=512 00:11:31.792 verify_state_save=0 00:11:31.792 do_verify=1 00:11:31.792 verify=crc32c-intel 00:11:31.792 [job0] 00:11:31.792 filename=/dev/nvme0n1 00:11:31.792 [job1] 00:11:31.792 filename=/dev/nvme0n2 00:11:31.792 [job2] 00:11:31.792 filename=/dev/nvme0n3 00:11:31.792 [job3] 00:11:31.792 filename=/dev/nvme0n4 00:11:31.792 Could not set queue depth (nvme0n1) 00:11:31.792 Could not set queue depth (nvme0n2) 00:11:31.792 Could not set queue depth (nvme0n3) 00:11:31.792 Could not set queue depth (nvme0n4) 00:11:32.052 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:32.052 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:32.052 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:32.052 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:32.052 fio-3.35 00:11:32.052 Starting 4 threads 00:11:33.430 00:11:33.430 job0: (groupid=0, jobs=1): err= 0: pid=582089: Fri Jul 26 16:16:52 2024 00:11:33.430 read: IOPS=1732, BW=6928KiB/s (7094kB/s)(7032KiB/1015msec) 00:11:33.430 slat (usec): min=2, max=34141, avg=315.67, stdev=2058.83 00:11:33.430 clat (usec): min=721, max=98521, avg=41594.99, stdev=21180.19 00:11:33.430 lat (usec): min=14039, max=98532, avg=41910.66, stdev=21292.19 00:11:33.430 clat percentiles (usec): 00:11:33.430 | 1.00th=[15664], 5.00th=[17957], 10.00th=[20317], 20.00th=[22414], 00:11:33.430 | 30.00th=[23987], 40.00th=[29754], 50.00th=[38011], 60.00th=[41157], 00:11:33.430 | 70.00th=[50070], 80.00th=[62653], 
90.00th=[76022], 95.00th=[85459], 00:11:33.430 | 99.00th=[98042], 99.50th=[98042], 99.90th=[98042], 99.95th=[98042], 00:11:33.430 | 99.99th=[98042] 00:11:33.430 write: IOPS=2017, BW=8071KiB/s (8265kB/s)(8192KiB/1015msec); 0 zone resets 00:11:33.430 slat (usec): min=3, max=37174, avg=214.26, stdev=1527.32 00:11:33.430 clat (usec): min=14376, max=57230, avg=26744.96, stdev=7465.84 00:11:33.430 lat (usec): min=14396, max=62847, avg=26959.22, stdev=7610.00 00:11:33.430 clat percentiles (usec): 00:11:33.430 | 1.00th=[16057], 5.00th=[17433], 10.00th=[18482], 20.00th=[20579], 00:11:33.430 | 30.00th=[22938], 40.00th=[23987], 50.00th=[25035], 60.00th=[26084], 00:11:33.430 | 70.00th=[28443], 80.00th=[32637], 90.00th=[39060], 95.00th=[43254], 00:11:33.430 | 99.00th=[45351], 99.50th=[45351], 99.90th=[49546], 99.95th=[56361], 00:11:33.430 | 99.99th=[57410] 00:11:33.431 bw ( KiB/s): min= 8192, max= 8192, per=16.64%, avg=8192.00, stdev= 0.00, samples=2 00:11:33.431 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:11:33.431 lat (usec) : 750=0.03% 00:11:33.431 lat (msec) : 20=14.92%, 50=71.26%, 100=13.79% 00:11:33.431 cpu : usr=3.45%, sys=2.86%, ctx=164, majf=0, minf=1 00:11:33.431 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:11:33.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:33.431 issued rwts: total=1758,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.431 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:33.431 job1: (groupid=0, jobs=1): err= 0: pid=582111: Fri Jul 26 16:16:52 2024 00:11:33.431 read: IOPS=4927, BW=19.2MiB/s (20.2MB/s)(19.4MiB/1006msec) 00:11:33.431 slat (usec): min=3, max=12386, avg=101.01, stdev=693.09 00:11:33.431 clat (usec): min=3943, max=26324, avg=13360.22, stdev=3524.57 00:11:33.431 lat (usec): min=4380, max=26333, avg=13461.24, stdev=3555.85 00:11:33.431 clat percentiles (usec): 00:11:33.431 | 1.00th=[ 7308], 5.00th=[ 9241], 10.00th=[10290], 20.00th=[10814], 00:11:33.431 | 30.00th=[11338], 40.00th=[11863], 50.00th=[12387], 60.00th=[13042], 00:11:33.431 | 70.00th=[13698], 80.00th=[15664], 90.00th=[18744], 95.00th=[21103], 00:11:33.431 | 99.00th=[23725], 99.50th=[24773], 99.90th=[26346], 99.95th=[26346], 00:11:33.431 | 99.99th=[26346] 00:11:33.431 write: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:11:33.431 slat (usec): min=4, max=10072, avg=82.57, stdev=389.59 00:11:33.431 clat (usec): min=1663, max=26247, avg=11991.11, stdev=2991.44 00:11:33.431 lat (usec): min=1697, max=26254, avg=12073.68, stdev=3001.71 00:11:33.431 clat percentiles (usec): 00:11:33.431 | 1.00th=[ 3916], 5.00th=[ 5997], 10.00th=[ 7635], 20.00th=[ 8848], 00:11:33.431 | 30.00th=[11731], 40.00th=[12518], 50.00th=[12780], 60.00th=[13042], 00:11:33.431 | 70.00th=[13435], 80.00th=[13829], 90.00th=[15270], 95.00th=[16188], 00:11:33.431 | 99.00th=[16712], 99.50th=[16909], 99.90th=[24511], 99.95th=[25560], 00:11:33.431 | 99.99th=[26346] 00:11:33.431 bw ( KiB/s): min=20480, max=20480, per=41.61%, avg=20480.00, stdev= 0.00, samples=2 00:11:33.431 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:11:33.431 lat (msec) : 2=0.09%, 4=0.44%, 10=15.42%, 20=80.34%, 50=3.71% 00:11:33.431 cpu : usr=8.16%, sys=12.84%, ctx=563, majf=0, minf=1 00:11:33.431 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:33.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:11:33.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:33.431 issued rwts: total=4957,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.431 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:33.431 job2: (groupid=0, jobs=1): err= 0: pid=582147: Fri Jul 26 16:16:52 2024 00:11:33.431 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:11:33.431 slat (usec): min=2, max=17431, avg=154.35, stdev=1128.14 00:11:33.431 clat (usec): min=1495, max=78119, avg=20666.36, stdev=11444.91 00:11:33.431 lat (usec): min=1502, max=78745, avg=20820.71, stdev=11525.94 00:11:33.431 clat percentiles (usec): 00:11:33.431 | 1.00th=[ 5145], 5.00th=[ 6521], 10.00th=[ 8291], 20.00th=[14484], 00:11:33.431 | 30.00th=[14746], 40.00th=[16188], 50.00th=[18220], 60.00th=[19268], 00:11:33.431 | 70.00th=[22414], 80.00th=[26870], 90.00th=[34341], 95.00th=[45351], 00:11:33.431 | 99.00th=[64226], 99.50th=[66847], 99.90th=[66847], 99.95th=[77071], 00:11:33.431 | 99.99th=[78119] 00:11:33.431 write: IOPS=3682, BW=14.4MiB/s (15.1MB/s)(14.4MiB/1003msec); 0 zone resets 00:11:33.431 slat (usec): min=3, max=13537, avg=102.76, stdev=681.50 00:11:33.431 clat (usec): min=959, max=119447, avg=17164.05, stdev=14473.53 00:11:33.431 lat (usec): min=982, max=119454, avg=17266.81, stdev=14517.26 00:11:33.431 clat percentiles (msec): 00:11:33.431 | 1.00th=[ 3], 5.00th=[ 4], 10.00th=[ 6], 20.00th=[ 10], 00:11:33.431 | 30.00th=[ 12], 40.00th=[ 14], 50.00th=[ 16], 60.00th=[ 17], 00:11:33.431 | 70.00th=[ 17], 80.00th=[ 20], 90.00th=[ 26], 95.00th=[ 45], 00:11:33.431 | 99.00th=[ 100], 99.50th=[ 111], 99.90th=[ 116], 99.95th=[ 116], 00:11:33.431 | 99.99th=[ 120] 00:11:33.431 bw ( KiB/s): min=12288, max=16408, per=29.15%, avg=14348.00, stdev=2913.28, samples=2 00:11:33.431 iops : min= 3072, max= 4102, avg=3587.00, stdev=728.32, samples=2 00:11:33.431 lat (usec) : 1000=0.04% 00:11:33.431 lat (msec) : 2=0.50%, 4=3.02%, 10=13.64%, 20=56.68%, 50=23.09% 00:11:33.431 lat (msec) : 100=2.59%, 250=0.44% 00:11:33.431 cpu : usr=3.19%, sys=5.79%, ctx=351, majf=0, minf=1 00:11:33.431 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:33.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:33.431 issued rwts: total=3072,3694,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.431 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:33.431 job3: (groupid=0, jobs=1): err= 0: pid=582164: Fri Jul 26 16:16:52 2024 00:11:33.431 read: IOPS=1514, BW=6059KiB/s (6205kB/s)(6144KiB/1014msec) 00:11:33.431 slat (usec): min=3, max=44844, avg=331.33, stdev=2396.81 00:11:33.431 clat (msec): min=15, max=115, avg=43.72, stdev=26.55 00:11:33.431 lat (msec): min=15, max=118, avg=44.05, stdev=26.69 00:11:33.431 clat percentiles (msec): 00:11:33.431 | 1.00th=[ 18], 5.00th=[ 20], 10.00th=[ 21], 20.00th=[ 21], 00:11:33.431 | 30.00th=[ 24], 40.00th=[ 27], 50.00th=[ 31], 60.00th=[ 39], 00:11:33.431 | 70.00th=[ 57], 80.00th=[ 69], 90.00th=[ 80], 95.00th=[ 106], 00:11:33.431 | 99.00th=[ 116], 99.50th=[ 116], 99.90th=[ 116], 99.95th=[ 116], 00:11:33.431 | 99.99th=[ 116] 00:11:33.431 write: IOPS=1604, BW=6418KiB/s (6572kB/s)(6508KiB/1014msec); 0 zone resets 00:11:33.431 slat (usec): min=4, max=23088, avg=297.29, stdev=1664.05 00:11:33.431 clat (usec): min=902, max=128520, avg=36633.77, stdev=19891.15 00:11:33.431 lat (msec): min=14, max=128, avg=36.93, stdev=20.01 00:11:33.431 clat 
percentiles (msec): 00:11:33.431 | 1.00th=[ 15], 5.00th=[ 15], 10.00th=[ 20], 20.00th=[ 22], 00:11:33.431 | 30.00th=[ 25], 40.00th=[ 29], 50.00th=[ 34], 60.00th=[ 36], 00:11:33.431 | 70.00th=[ 39], 80.00th=[ 44], 90.00th=[ 67], 95.00th=[ 75], 00:11:33.431 | 99.00th=[ 124], 99.50th=[ 128], 99.90th=[ 129], 99.95th=[ 129], 00:11:33.431 | 99.99th=[ 129] 00:11:33.431 bw ( KiB/s): min= 4288, max= 8000, per=12.48%, avg=6144.00, stdev=2624.78, samples=2 00:11:33.431 iops : min= 1072, max= 2000, avg=1536.00, stdev=656.20, samples=2 00:11:33.431 lat (usec) : 1000=0.03% 00:11:33.431 lat (msec) : 20=12.27%, 50=64.24%, 100=19.66%, 250=3.79% 00:11:33.431 cpu : usr=2.17%, sys=4.24%, ctx=150, majf=0, minf=1 00:11:33.431 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:11:33.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:33.431 issued rwts: total=1536,1627,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.431 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:33.431 00:11:33.431 Run status group 0 (all jobs): 00:11:33.431 READ: bw=43.6MiB/s (45.7MB/s), 6059KiB/s-19.2MiB/s (6205kB/s-20.2MB/s), io=44.2MiB (46.4MB), run=1003-1015msec 00:11:33.431 WRITE: bw=48.1MiB/s (50.4MB/s), 6418KiB/s-19.9MiB/s (6572kB/s-20.8MB/s), io=48.8MiB (51.2MB), run=1003-1015msec 00:11:33.431 00:11:33.431 Disk stats (read/write): 00:11:33.431 nvme0n1: ios=1417/1536, merge=0/0, ticks=20917/14090, in_queue=35007, util=96.19% 00:11:33.431 nvme0n2: ios=4134/4375, merge=0/0, ticks=51434/47016, in_queue=98450, util=98.78% 00:11:33.431 nvme0n3: ios=2409/3072, merge=0/0, ticks=31797/36533, in_queue=68330, util=96.22% 00:11:33.431 nvme0n4: ios=1277/1536, merge=0/0, ticks=17562/27086, in_queue=44648, util=97.35% 00:11:33.431 16:16:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:33.431 [global] 00:11:33.431 thread=1 00:11:33.431 invalidate=1 00:11:33.431 rw=randwrite 00:11:33.431 time_based=1 00:11:33.431 runtime=1 00:11:33.431 ioengine=libaio 00:11:33.431 direct=1 00:11:33.431 bs=4096 00:11:33.431 iodepth=128 00:11:33.431 norandommap=0 00:11:33.431 numjobs=1 00:11:33.431 00:11:33.431 verify_dump=1 00:11:33.431 verify_backlog=512 00:11:33.431 verify_state_save=0 00:11:33.431 do_verify=1 00:11:33.431 verify=crc32c-intel 00:11:33.431 [job0] 00:11:33.431 filename=/dev/nvme0n1 00:11:33.431 [job1] 00:11:33.431 filename=/dev/nvme0n2 00:11:33.431 [job2] 00:11:33.431 filename=/dev/nvme0n3 00:11:33.431 [job3] 00:11:33.431 filename=/dev/nvme0n4 00:11:33.431 Could not set queue depth (nvme0n1) 00:11:33.431 Could not set queue depth (nvme0n2) 00:11:33.431 Could not set queue depth (nvme0n3) 00:11:33.431 Could not set queue depth (nvme0n4) 00:11:33.431 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:33.431 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:33.431 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:33.431 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:33.431 fio-3.35 00:11:33.431 Starting 4 threads 00:11:34.816 00:11:34.816 job0: (groupid=0, jobs=1): err= 0: pid=582400: Fri Jul 26 
16:16:54 2024 00:11:34.816 read: IOPS=3163, BW=12.4MiB/s (13.0MB/s)(12.9MiB/1043msec) 00:11:34.816 slat (usec): min=3, max=23718, avg=144.12, stdev=1112.98 00:11:34.816 clat (usec): min=5355, max=68422, avg=19789.47, stdev=11700.48 00:11:34.816 lat (usec): min=5365, max=73097, avg=19933.59, stdev=11787.51 00:11:34.816 clat percentiles (usec): 00:11:34.816 | 1.00th=[ 9896], 5.00th=[10945], 10.00th=[11600], 20.00th=[12125], 00:11:34.816 | 30.00th=[13042], 40.00th=[13960], 50.00th=[16450], 60.00th=[16909], 00:11:34.816 | 70.00th=[18482], 80.00th=[24249], 90.00th=[35914], 95.00th=[52691], 00:11:34.816 | 99.00th=[63177], 99.50th=[63701], 99.90th=[63701], 99.95th=[63701], 00:11:34.816 | 99.99th=[68682] 00:11:34.816 write: IOPS=3436, BW=13.4MiB/s (14.1MB/s)(14.0MiB/1043msec); 0 zone resets 00:11:34.816 slat (usec): min=4, max=25226, avg=125.83, stdev=1133.00 00:11:34.816 clat (usec): min=575, max=76853, avg=18463.35, stdev=12460.21 00:11:34.816 lat (usec): min=1181, max=76890, avg=18589.18, stdev=12566.96 00:11:34.816 clat percentiles (usec): 00:11:34.816 | 1.00th=[ 2474], 5.00th=[ 5604], 10.00th=[ 7701], 20.00th=[ 8848], 00:11:34.816 | 30.00th=[10159], 40.00th=[12518], 50.00th=[14353], 60.00th=[16909], 00:11:34.816 | 70.00th=[21103], 80.00th=[26346], 90.00th=[35914], 95.00th=[51643], 00:11:34.816 | 99.00th=[52691], 99.50th=[54264], 99.90th=[61080], 99.95th=[73925], 00:11:34.816 | 99.99th=[77071] 00:11:34.816 bw ( KiB/s): min=12280, max=16392, per=31.24%, avg=14336.00, stdev=2907.62, samples=2 00:11:34.816 iops : min= 3070, max= 4098, avg=3584.00, stdev=726.91, samples=2 00:11:34.816 lat (usec) : 750=0.01% 00:11:34.816 lat (msec) : 2=0.36%, 4=0.52%, 10=15.63%, 20=55.67%, 50=22.52% 00:11:34.816 lat (msec) : 100=5.29% 00:11:34.816 cpu : usr=4.80%, sys=7.20%, ctx=248, majf=0, minf=9 00:11:34.816 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:34.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.816 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:34.816 issued rwts: total=3300,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:34.816 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:34.816 job1: (groupid=0, jobs=1): err= 0: pid=582401: Fri Jul 26 16:16:54 2024 00:11:34.816 read: IOPS=2916, BW=11.4MiB/s (11.9MB/s)(11.9MiB/1044msec) 00:11:34.816 slat (usec): min=3, max=16197, avg=157.84, stdev=1139.38 00:11:34.816 clat (usec): min=9232, max=75133, avg=21639.90, stdev=11467.99 00:11:34.816 lat (usec): min=9237, max=88286, avg=21797.74, stdev=11551.63 00:11:34.816 clat percentiles (usec): 00:11:34.816 | 1.00th=[ 9896], 5.00th=[11863], 10.00th=[13042], 20.00th=[13829], 00:11:34.816 | 30.00th=[14746], 40.00th=[16057], 50.00th=[17957], 60.00th=[19792], 00:11:34.816 | 70.00th=[22414], 80.00th=[30540], 90.00th=[33817], 95.00th=[38011], 00:11:34.816 | 99.00th=[74974], 99.50th=[74974], 99.90th=[74974], 99.95th=[74974], 00:11:34.816 | 99.99th=[74974] 00:11:34.816 write: IOPS=2942, BW=11.5MiB/s (12.1MB/s)(12.0MiB/1044msec); 0 zone resets 00:11:34.816 slat (usec): min=4, max=21275, avg=156.91, stdev=1075.81 00:11:34.816 clat (usec): min=2990, max=46787, avg=21045.17, stdev=7842.12 00:11:34.816 lat (usec): min=3031, max=46826, avg=21202.08, stdev=7926.68 00:11:34.816 clat percentiles (usec): 00:11:34.816 | 1.00th=[ 7832], 5.00th=[ 9241], 10.00th=[10290], 20.00th=[12649], 00:11:34.816 | 30.00th=[14353], 40.00th=[19006], 50.00th=[23200], 60.00th=[25035], 00:11:34.816 | 70.00th=[25822], 80.00th=[27395], 
90.00th=[30540], 95.00th=[33817], 00:11:34.816 | 99.00th=[39060], 99.50th=[39584], 99.90th=[42730], 99.95th=[43254], 00:11:34.816 | 99.99th=[46924] 00:11:34.816 bw ( KiB/s): min=12288, max=12288, per=26.78%, avg=12288.00, stdev= 0.00, samples=2 00:11:34.816 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:11:34.816 lat (msec) : 4=0.03%, 10=5.05%, 20=47.47%, 50=45.38%, 100=2.06% 00:11:34.816 cpu : usr=3.74%, sys=6.62%, ctx=214, majf=0, minf=13 00:11:34.816 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:34.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.816 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:34.816 issued rwts: total=3045,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:34.816 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:34.816 job2: (groupid=0, jobs=1): err= 0: pid=582406: Fri Jul 26 16:16:54 2024 00:11:34.816 read: IOPS=1531, BW=6126KiB/s (6273kB/s)(6144KiB/1003msec) 00:11:34.816 slat (usec): min=2, max=41058, avg=296.47, stdev=2094.27 00:11:34.816 clat (msec): min=10, max=129, avg=38.26, stdev=26.50 00:11:34.816 lat (msec): min=14, max=129, avg=38.56, stdev=26.61 00:11:34.816 clat percentiles (msec): 00:11:34.816 | 1.00th=[ 15], 5.00th=[ 15], 10.00th=[ 17], 20.00th=[ 25], 00:11:34.816 | 30.00th=[ 29], 40.00th=[ 29], 50.00th=[ 32], 60.00th=[ 33], 00:11:34.816 | 70.00th=[ 34], 80.00th=[ 41], 90.00th=[ 82], 95.00th=[ 109], 00:11:34.816 | 99.00th=[ 130], 99.50th=[ 130], 99.90th=[ 130], 99.95th=[ 130], 00:11:34.816 | 99.99th=[ 130] 00:11:34.816 write: IOPS=1730, BW=6923KiB/s (7089kB/s)(6944KiB/1003msec); 0 zone resets 00:11:34.816 slat (usec): min=3, max=25992, avg=307.43, stdev=1964.81 00:11:34.816 clat (usec): min=939, max=101575, avg=37964.77, stdev=22674.05 00:11:34.816 lat (msec): min=7, max=101, avg=38.27, stdev=22.76 00:11:34.816 clat percentiles (msec): 00:11:34.816 | 1.00th=[ 8], 5.00th=[ 17], 10.00th=[ 20], 20.00th=[ 21], 00:11:34.816 | 30.00th=[ 23], 40.00th=[ 25], 50.00th=[ 29], 60.00th=[ 32], 00:11:34.816 | 70.00th=[ 51], 80.00th=[ 57], 90.00th=[ 71], 95.00th=[ 91], 00:11:34.816 | 99.00th=[ 102], 99.50th=[ 102], 99.90th=[ 102], 99.95th=[ 102], 00:11:34.816 | 99.99th=[ 102] 00:11:34.816 bw ( KiB/s): min= 6392, max= 6480, per=14.03%, avg=6436.00, stdev=62.23, samples=2 00:11:34.816 iops : min= 1598, max= 1620, avg=1609.00, stdev=15.56, samples=2 00:11:34.816 lat (usec) : 1000=0.03% 00:11:34.816 lat (msec) : 10=0.98%, 20=16.53%, 50=59.44%, 100=19.04%, 250=3.97% 00:11:34.816 cpu : usr=1.10%, sys=2.50%, ctx=124, majf=0, minf=15 00:11:34.816 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:11:34.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.817 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:34.817 issued rwts: total=1536,1736,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:34.817 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:34.817 job3: (groupid=0, jobs=1): err= 0: pid=582407: Fri Jul 26 16:16:54 2024 00:11:34.817 read: IOPS=3550, BW=13.9MiB/s (14.5MB/s)(13.9MiB/1002msec) 00:11:34.817 slat (usec): min=2, max=9794, avg=117.43, stdev=690.42 00:11:34.817 clat (usec): min=1743, max=64534, avg=16697.28, stdev=7001.54 00:11:34.817 lat (usec): min=1751, max=64538, avg=16814.71, stdev=7016.06 00:11:34.817 clat percentiles (usec): 00:11:34.817 | 1.00th=[ 2868], 5.00th=[ 8455], 10.00th=[11338], 20.00th=[13304], 00:11:34.817 | 
30.00th=[14615], 40.00th=[15008], 50.00th=[15664], 60.00th=[16712], 00:11:34.817 | 70.00th=[17695], 80.00th=[18744], 90.00th=[23200], 95.00th=[26346], 00:11:34.817 | 99.00th=[55313], 99.50th=[55313], 99.90th=[62129], 99.95th=[62129], 00:11:34.817 | 99.99th=[64750] 00:11:34.817 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:11:34.817 slat (usec): min=3, max=19911, avg=146.93, stdev=1063.38 00:11:34.817 clat (usec): min=731, max=70627, avg=18627.18, stdev=10248.54 00:11:34.817 lat (usec): min=735, max=70671, avg=18774.11, stdev=10321.83 00:11:34.817 clat percentiles (usec): 00:11:34.817 | 1.00th=[ 4883], 5.00th=[ 7373], 10.00th=[ 9110], 20.00th=[10683], 00:11:34.817 | 30.00th=[12649], 40.00th=[14877], 50.00th=[15926], 60.00th=[19006], 00:11:34.817 | 70.00th=[21627], 80.00th=[22152], 90.00th=[29230], 95.00th=[42206], 00:11:34.817 | 99.00th=[54264], 99.50th=[54264], 99.90th=[54264], 99.95th=[69731], 00:11:34.817 | 99.99th=[70779] 00:11:34.817 bw ( KiB/s): min=12288, max=16384, per=31.24%, avg=14336.00, stdev=2896.31, samples=2 00:11:34.817 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:11:34.817 lat (usec) : 750=0.04%, 1000=0.07% 00:11:34.817 lat (msec) : 2=0.39%, 4=0.87%, 10=10.24%, 20=63.08%, 50=22.81% 00:11:34.817 lat (msec) : 100=2.51% 00:11:34.817 cpu : usr=3.20%, sys=5.09%, ctx=279, majf=0, minf=13 00:11:34.817 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:11:34.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.817 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:34.817 issued rwts: total=3558,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:34.817 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:34.817 00:11:34.817 Run status group 0 (all jobs): 00:11:34.817 READ: bw=42.8MiB/s (44.9MB/s), 6126KiB/s-13.9MiB/s (6273kB/s-14.5MB/s), io=44.7MiB (46.9MB), run=1002-1044msec 00:11:34.817 WRITE: bw=44.8MiB/s (47.0MB/s), 6923KiB/s-14.0MiB/s (7089kB/s-14.7MB/s), io=46.8MiB (49.1MB), run=1002-1044msec 00:11:34.817 00:11:34.817 Disk stats (read/write): 00:11:34.817 nvme0n1: ios=2603/3062, merge=0/0, ticks=40380/41123, in_queue=81503, util=99.10% 00:11:34.817 nvme0n2: ios=2372/2560, merge=0/0, ticks=27599/34442, in_queue=62041, util=99.19% 00:11:34.817 nvme0n3: ios=1377/1536, merge=0/0, ticks=11080/16079, in_queue=27159, util=88.81% 00:11:34.817 nvme0n4: ios=3086/3094, merge=0/0, ticks=24929/26056, in_queue=50985, util=99.05% 00:11:34.817 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:34.817 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=582539 00:11:34.817 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:34.817 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:34.817 [global] 00:11:34.817 thread=1 00:11:34.817 invalidate=1 00:11:34.817 rw=read 00:11:34.817 time_based=1 00:11:34.817 runtime=10 00:11:34.817 ioengine=libaio 00:11:34.817 direct=1 00:11:34.817 bs=4096 00:11:34.817 iodepth=1 00:11:34.817 norandommap=1 00:11:34.817 numjobs=1 00:11:34.817 00:11:34.817 [job0] 00:11:34.817 filename=/dev/nvme0n1 00:11:34.817 [job1] 00:11:34.817 filename=/dev/nvme0n2 00:11:34.817 [job2] 00:11:34.817 filename=/dev/nvme0n3 00:11:34.817 [job3] 00:11:34.817 filename=/dev/nvme0n4 00:11:34.817 Could not set 
queue depth (nvme0n1) 00:11:34.817 Could not set queue depth (nvme0n2) 00:11:34.817 Could not set queue depth (nvme0n3) 00:11:34.817 Could not set queue depth (nvme0n4) 00:11:34.817 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:34.817 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:34.817 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:34.817 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:34.817 fio-3.35 00:11:34.817 Starting 4 threads 00:11:38.105 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:38.105 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:38.105 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=1961984, buflen=4096 00:11:38.105 fio: pid=582640, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:38.105 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:38.105 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:38.106 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=26312704, buflen=4096 00:11:38.106 fio: pid=582639, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:38.365 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=28307456, buflen=4096 00:11:38.365 fio: pid=582637, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:38.623 16:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:38.623 16:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:38.882 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=23986176, buflen=4096 00:11:38.882 fio: pid=582638, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:11:38.882 00:11:38.882 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=582637: Fri Jul 26 16:16:58 2024 00:11:38.882 read: IOPS=2000, BW=8001KiB/s (8193kB/s)(27.0MiB/3455msec) 00:11:38.882 slat (usec): min=4, max=17898, avg=27.53, stdev=272.21 00:11:38.882 clat (usec): min=304, max=41491, avg=465.25, stdev=537.70 00:11:38.882 lat (usec): min=309, max=41505, avg=492.78, stdev=602.56 00:11:38.882 clat percentiles (usec): 00:11:38.882 | 1.00th=[ 326], 5.00th=[ 363], 10.00th=[ 388], 20.00th=[ 416], 00:11:38.882 | 30.00th=[ 437], 40.00th=[ 449], 50.00th=[ 457], 60.00th=[ 469], 00:11:38.882 | 70.00th=[ 482], 80.00th=[ 494], 90.00th=[ 519], 95.00th=[ 545], 00:11:38.882 | 99.00th=[ 619], 99.50th=[ 660], 99.90th=[ 824], 99.95th=[ 873], 00:11:38.882 | 99.99th=[41681] 00:11:38.882 bw ( KiB/s): min= 7712, max= 8704, per=39.00%, avg=8077.33, stdev=377.36, samples=6 00:11:38.882 iops : min= 1928, max= 2176, avg=2019.33, stdev=94.34, samples=6 00:11:38.882 lat (usec) : 
500=83.35%, 750=16.46%, 1000=0.14% 00:11:38.882 lat (msec) : 20=0.01%, 50=0.01% 00:11:38.882 cpu : usr=1.74%, sys=5.24%, ctx=6918, majf=0, minf=1 00:11:38.882 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:38.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.882 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.882 issued rwts: total=6912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:38.882 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:38.882 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=582638: Fri Jul 26 16:16:58 2024 00:11:38.882 read: IOPS=1541, BW=6166KiB/s (6314kB/s)(22.9MiB/3799msec) 00:11:38.882 slat (usec): min=4, max=11542, avg=31.57, stdev=278.68 00:11:38.882 clat (usec): min=327, max=41919, avg=611.20, stdev=2615.89 00:11:38.882 lat (usec): min=332, max=49605, avg=641.57, stdev=2650.75 00:11:38.882 clat percentiles (usec): 00:11:38.882 | 1.00th=[ 347], 5.00th=[ 363], 10.00th=[ 379], 20.00th=[ 404], 00:11:38.882 | 30.00th=[ 416], 40.00th=[ 424], 50.00th=[ 433], 60.00th=[ 445], 00:11:38.882 | 70.00th=[ 461], 80.00th=[ 469], 90.00th=[ 490], 95.00th=[ 510], 00:11:38.882 | 99.00th=[ 586], 99.50th=[ 758], 99.90th=[41157], 99.95th=[41157], 00:11:38.882 | 99.99th=[41681] 00:11:38.882 bw ( KiB/s): min= 2952, max= 9024, per=31.58%, avg=6540.86, stdev=2584.38, samples=7 00:11:38.882 iops : min= 738, max= 2256, avg=1635.14, stdev=646.14, samples=7 00:11:38.882 lat (usec) : 500=93.17%, 750=6.30%, 1000=0.03% 00:11:38.882 lat (msec) : 10=0.03%, 50=0.44% 00:11:38.882 cpu : usr=1.58%, sys=4.11%, ctx=5863, majf=0, minf=1 00:11:38.882 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:38.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.882 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.882 issued rwts: total=5857,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:38.882 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:38.882 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=582639: Fri Jul 26 16:16:58 2024 00:11:38.882 read: IOPS=2013, BW=8053KiB/s (8246kB/s)(25.1MiB/3191msec) 00:11:38.882 slat (nsec): min=4758, max=72428, avg=22189.90, stdev=10393.47 00:11:38.882 clat (usec): min=322, max=3080, avg=465.54, stdev=65.04 00:11:38.882 lat (usec): min=329, max=3093, avg=487.73, stdev=66.93 00:11:38.882 clat percentiles (usec): 00:11:38.882 | 1.00th=[ 351], 5.00th=[ 375], 10.00th=[ 392], 20.00th=[ 420], 00:11:38.882 | 30.00th=[ 437], 40.00th=[ 453], 50.00th=[ 465], 60.00th=[ 482], 00:11:38.882 | 70.00th=[ 494], 80.00th=[ 506], 90.00th=[ 529], 95.00th=[ 553], 00:11:38.882 | 99.00th=[ 619], 99.50th=[ 644], 99.90th=[ 709], 99.95th=[ 824], 00:11:38.882 | 99.99th=[ 3097] 00:11:38.882 bw ( KiB/s): min= 7736, max= 8600, per=39.29%, avg=8138.67, stdev=335.42, samples=6 00:11:38.882 iops : min= 1934, max= 2150, avg=2034.67, stdev=83.85, samples=6 00:11:38.882 lat (usec) : 500=76.48%, 750=23.41%, 1000=0.06% 00:11:38.882 lat (msec) : 2=0.02%, 4=0.02% 00:11:38.882 cpu : usr=2.19%, sys=4.86%, ctx=6426, majf=0, minf=1 00:11:38.882 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:38.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.882 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.882 issued 
rwts: total=6425,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:38.882 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:38.882 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=582640: Fri Jul 26 16:16:58 2024 00:11:38.882 read: IOPS=164, BW=656KiB/s (672kB/s)(1916KiB/2919msec) 00:11:38.882 slat (nsec): min=5877, max=58581, avg=18207.34, stdev=8596.68 00:11:38.882 clat (usec): min=371, max=41427, avg=6023.18, stdev=13867.37 00:11:38.882 lat (usec): min=383, max=41461, avg=6041.38, stdev=13869.84 00:11:38.882 clat percentiles (usec): 00:11:38.882 | 1.00th=[ 396], 5.00th=[ 429], 10.00th=[ 453], 20.00th=[ 478], 00:11:38.882 | 30.00th=[ 494], 40.00th=[ 515], 50.00th=[ 537], 60.00th=[ 562], 00:11:38.882 | 70.00th=[ 594], 80.00th=[ 652], 90.00th=[41157], 95.00th=[41157], 00:11:38.882 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:11:38.882 | 99.99th=[41681] 00:11:38.882 bw ( KiB/s): min= 96, max= 1904, per=3.62%, avg=750.40, stdev=901.06, samples=5 00:11:38.882 iops : min= 24, max= 476, avg=187.60, stdev=225.27, samples=5 00:11:38.882 lat (usec) : 500=34.58%, 750=49.58%, 1000=2.08% 00:11:38.882 lat (msec) : 50=13.54% 00:11:38.882 cpu : usr=0.21%, sys=0.34%, ctx=480, majf=0, minf=1 00:11:38.882 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:38.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.882 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.882 issued rwts: total=480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:38.882 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:38.882 00:11:38.882 Run status group 0 (all jobs): 00:11:38.882 READ: bw=20.2MiB/s (21.2MB/s), 656KiB/s-8053KiB/s (672kB/s-8246kB/s), io=76.8MiB (80.6MB), run=2919-3799msec 00:11:38.882 00:11:38.882 Disk stats (read/write): 00:11:38.882 nvme0n1: ios=6781/0, merge=0/0, ticks=3240/0, in_queue=3240, util=98.88% 00:11:38.882 nvme0n2: ios=5892/0, merge=0/0, ticks=3543/0, in_queue=3543, util=98.98% 00:11:38.882 nvme0n3: ios=6287/0, merge=0/0, ticks=2803/0, in_queue=2803, util=96.79% 00:11:38.882 nvme0n4: ios=477/0, merge=0/0, ticks=2802/0, in_queue=2802, util=96.71% 00:11:38.882 16:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:38.882 16:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:39.140 16:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:39.140 16:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:39.399 16:16:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:39.399 16:16:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:39.657 16:16:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:39.657 16:16:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:40.225 16:16:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:40.225 16:16:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:40.485 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:40.485 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 582539 00:11:40.485 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:40.485 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:41.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.423 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:41.423 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:41.423 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:41.423 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:41.423 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:41.423 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:41.423 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:41.423 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:41.423 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:41.423 nvmf hotplug test: fio failed as expected 00:11:41.423 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:41.423 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:41.423 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:41.423 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:41.681 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:41.681 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:41.681 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:41.681 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:11:41.681 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:41.681 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:11:41.681 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:41.681 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
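The hotplug phase traced above (target/fio.sh@58 through @80) boils down to: start a long read job against the connected namespaces, delete the backing raid and malloc bdevs over RPC while it runs, and treat fio's resulting I/O errors as the expected outcome. A minimal standalone sketch of that pattern follows; the NQN, bdev names and script paths are taken from this log, but the running target and connected namespaces are assumed, and the exact ordering of deletes versus the fio runtime is simplified.

    #!/usr/bin/env bash
    # Sketch of the hotplug test above: fio reads from the exported namespaces
    # while the backing bdevs are deleted over RPC, so fio is expected to fail
    # with remote/local I/O errors instead of completing cleanly.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    fio_wrapper=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper

    # 10-second read job in the background (same flags as target/fio.sh@58).
    "$fio_wrapper" -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3

    # Pull the storage out from under fio (same bdev names as the log).
    "$rpc" bdev_raid_delete concat0
    "$rpc" bdev_raid_delete raid0
    for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        "$rpc" bdev_malloc_delete "$m"
    done

    # fio should exit non-zero once its files disappear.
    if wait "$fio_pid"; then
        echo "unexpected: fio survived bdev removal" >&2
    else
        echo "nvmf hotplug test: fio failed as expected"
    fi

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1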
00:11:41.681 rmmod nvme_tcp 00:11:41.681 rmmod nvme_fabrics 00:11:41.681 rmmod nvme_keyring 00:11:41.681 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:41.681 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:11:41.681 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:11:41.681 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 580382 ']' 00:11:41.681 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 580382 00:11:41.681 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 580382 ']' 00:11:41.681 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 580382 00:11:41.681 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:11:41.681 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:41.681 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 580382 00:11:41.681 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:41.681 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:41.681 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 580382' 00:11:41.681 killing process with pid 580382 00:11:41.681 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 580382 00:11:41.681 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 580382 00:11:43.061 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:43.061 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:43.061 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:43.061 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:43.061 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:43.061 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.061 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.061 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.977 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:44.977 00:11:44.977 real 0m26.808s 00:11:44.977 user 1m30.797s 00:11:44.977 sys 0m7.967s 00:11:44.977 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:44.977 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.977 ************************************ 00:11:44.977 END TEST nvmf_fio_target 00:11:44.977 ************************************ 00:11:44.977 16:17:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:44.977 16:17:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:44.977 16:17:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:44.978 16:17:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:44.978 ************************************ 00:11:44.978 START TEST nvmf_bdevio 00:11:44.978 ************************************ 00:11:44.978 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:44.978 * Looking for test storage... 00:11:44.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:44.978 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:44.978 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:44.978 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:44.978 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:44.978 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:44.978 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:44.978 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:44.978 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:44.978 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:44.978 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:44.978 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:44.978 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:45.236 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:45.236 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:45.236 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:45.236 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:45.236 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:45.236 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:45.236 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:45.236 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:45.236 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:45.236 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
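nvmf/common.sh, sourced here at the start of the bdevio run, is the same helper the fio target test above relied on: it fixes the listener port (4420), generates a host NQN with nvme gen-hostnqn, and wraps the kernel initiator behind NVME_CONNECT. As a rough illustration of how those pieces combine on the initiator side (TARGET_IP is a stand-in placeholder, not a variable set in this excerpt):

    # Illustrative pairing of the sourced common.sh variables on the initiator.
    # TARGET_IP is a placeholder for whatever address the target listens on.
    NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
    NVME_HOSTNQN=$(nvme gen-hostnqn)

    nvme connect -t tcp -a "$TARGET_IP" -s 4420 \
        -n "$NVME_SUBNQN" --hostnqn="$NVME_HOSTNQN"

    # ... run I/O against the namespaces that appear under /dev ...

    nvme disconnect -n "$NVME_SUBNQN"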
00:11:45.236 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.236 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.236 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.236 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:45.236 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.236 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:11:45.236 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:45.236 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:45.236 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:45.236 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:45.236 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:45.236 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:11:45.236 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:45.236 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:45.236 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:45.237 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:45.237 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:45.237 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:45.237 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:45.237 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:45.237 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:45.237 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:45.237 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.237 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:45.237 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.237 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:45.237 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:45.237 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:11:45.237 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
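gather_supported_nvmf_pci_devs, whose trace starts here, buckets candidate NICs by PCI vendor:device ID (e810, x722, mlx) before picking the pair of test ports. The helper works from a pre-built pci_bus_cache; the sketch below gets a similar result directly from lspci and is only an approximation of that logic (the Mellanox branch in particular is collapsed to a wildcard):

    #!/usr/bin/env bash
    intel=0x8086 mellanox=0x15b3
    e810=() x722=() mlx=()
    while read -r slot vendor device; do
        case "$vendor:$device" in
            "$intel:0x1592"|"$intel:0x159b") e810+=("$slot") ;;   # Intel E810 family
            "$intel:0x37d2")                 x722+=("$slot") ;;   # Intel X722
            "$mellanox:"*)                   mlx+=("$slot")  ;;   # any Mellanox device (simplified)
        esac
    done < <(lspci -Dnmm | awk '{gsub(/"/,""); print $1, "0x"$3, "0x"$4}')
    echo "e810 ports: ${e810[*]:-none}"   # the trace above found 0000:0a:00.0 and 0000:0a:00.1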
00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:47.140 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:47.140 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:47.140 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.141 
16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:47.141 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:47.141 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:47.141 16:17:06 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:47.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:47.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:11:47.141 00:11:47.141 --- 10.0.0.2 ping statistics --- 00:11:47.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.141 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:47.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:47.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:11:47.141 00:11:47.141 --- 10.0.0.1 ping statistics --- 00:11:47.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.141 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:47.141 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:47.399 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:47.399 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:47.399 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:47.399 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:47.399 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=585528 00:11:47.399 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:47.399 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 585528 00:11:47.399 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 585528 ']' 00:11:47.399 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.399 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:47.399 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.399 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:47.399 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:47.399 [2024-07-26 16:17:07.013056] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:11:47.399 [2024-07-26 16:17:07.013213] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:47.399 EAL: No free 2048 kB hugepages reported on node 1 00:11:47.399 [2024-07-26 16:17:07.153967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:47.966 [2024-07-26 16:17:07.421907] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:47.966 [2024-07-26 16:17:07.421994] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:47.966 [2024-07-26 16:17:07.422022] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:47.966 [2024-07-26 16:17:07.422045] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:47.966 [2024-07-26 16:17:07.422078] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:47.966 [2024-07-26 16:17:07.422217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:47.966 [2024-07-26 16:17:07.422283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:47.966 [2024-07-26 16:17:07.422663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:47.966 [2024-07-26 16:17:07.422672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:48.224 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:48.224 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:48.224 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:48.224 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:48.224 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:48.224 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:48.224 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:48.224 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.224 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:48.224 [2024-07-26 16:17:07.972824] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:48.224 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.224 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:48.224 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.224 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:48.511 Malloc0 00:11:48.511 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.511 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:48.511 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.511 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:48.511 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.511 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:48.511 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.511 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:48.511 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.511 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:48.511 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.511 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:48.511 [2024-07-26 16:17:08.075988] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:48.511 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.511 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:48.511 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:48.511 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:11:48.511 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:11:48.511 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:48.512 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:48.512 { 00:11:48.512 "params": { 00:11:48.512 "name": "Nvme$subsystem", 00:11:48.512 "trtype": "$TEST_TRANSPORT", 00:11:48.512 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:48.512 "adrfam": "ipv4", 00:11:48.512 "trsvcid": "$NVMF_PORT", 00:11:48.512 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:48.512 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:48.512 "hdgst": ${hdgst:-false}, 00:11:48.512 "ddgst": ${ddgst:-false} 00:11:48.512 }, 00:11:48.512 "method": "bdev_nvme_attach_controller" 00:11:48.512 } 00:11:48.512 EOF 00:11:48.512 )") 00:11:48.512 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:11:48.512 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
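The rpc_cmd calls traced in this block are thin wrappers around SPDK's scripts/rpc.py. Replayed outside the harness, the bdevio target setup amounts to the short RPC sequence below; the parameters are copied from the trace, while the default socket path and the plain rpc() wrapper are assumptions (the target runs inside cvl_0_0_ns_spdk, but its RPC endpoint is a Unix socket, so it is reachable from the default namespace):

    rpc() { ./scripts/rpc.py "$@"; }                 # default socket /var/tmp/spdk.sock assumed
    rpc nvmf_create_transport -t tcp -o -u 8192      # NVMF_TRANSPORT_OPTS plus -u 8192, as traced
    rpc bdev_malloc_create 64 512 -b Malloc0         # 64 MiB malloc bdev, 512-byte blocks
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420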
00:11:48.512 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:11:48.512 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:48.512 "params": { 00:11:48.512 "name": "Nvme1", 00:11:48.512 "trtype": "tcp", 00:11:48.512 "traddr": "10.0.0.2", 00:11:48.512 "adrfam": "ipv4", 00:11:48.512 "trsvcid": "4420", 00:11:48.512 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:48.512 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:48.512 "hdgst": false, 00:11:48.512 "ddgst": false 00:11:48.512 }, 00:11:48.512 "method": "bdev_nvme_attach_controller" 00:11:48.512 }' 00:11:48.512 [2024-07-26 16:17:08.154322] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:48.512 [2024-07-26 16:17:08.154483] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid585685 ] 00:11:48.512 EAL: No free 2048 kB hugepages reported on node 1 00:11:48.770 [2024-07-26 16:17:08.278735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:48.770 [2024-07-26 16:17:08.523750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:48.770 [2024-07-26 16:17:08.523792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.770 [2024-07-26 16:17:08.523803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:49.704 I/O targets: 00:11:49.704 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:49.704 00:11:49.704 00:11:49.704 CUnit - A unit testing framework for C - Version 2.1-3 00:11:49.704 http://cunit.sourceforge.net/ 00:11:49.704 00:11:49.704 00:11:49.704 Suite: bdevio tests on: Nvme1n1 00:11:49.704 Test: blockdev write read block ...passed 00:11:49.704 Test: blockdev write zeroes read block ...passed 00:11:49.704 Test: blockdev write zeroes read no split ...passed 00:11:49.704 Test: blockdev write zeroes read split ...passed 00:11:49.705 Test: blockdev write zeroes read split partial ...passed 00:11:49.705 Test: blockdev reset ...[2024-07-26 16:17:09.360077] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:49.705 [2024-07-26 16:17:09.360261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:11:49.963 [2024-07-26 16:17:09.511113] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:49.963 passed 00:11:49.963 Test: blockdev write read 8 blocks ...passed 00:11:49.963 Test: blockdev write read size > 128k ...passed 00:11:49.963 Test: blockdev write read invalid size ...passed 00:11:49.963 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:49.963 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:49.963 Test: blockdev write read max offset ...passed 00:11:49.963 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:49.963 Test: blockdev writev readv 8 blocks ...passed 00:11:49.963 Test: blockdev writev readv 30 x 1block ...passed 00:11:50.222 Test: blockdev writev readv block ...passed 00:11:50.222 Test: blockdev writev readv size > 128k ...passed 00:11:50.222 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:50.222 Test: blockdev comparev and writev ...[2024-07-26 16:17:09.772618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:50.222 [2024-07-26 16:17:09.772689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:50.222 [2024-07-26 16:17:09.772728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:50.222 [2024-07-26 16:17:09.772757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:50.222 [2024-07-26 16:17:09.773298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:50.222 [2024-07-26 16:17:09.773332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:50.222 [2024-07-26 16:17:09.773365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:50.222 [2024-07-26 16:17:09.773390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:50.222 [2024-07-26 16:17:09.773904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:50.222 [2024-07-26 16:17:09.773937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:50.222 [2024-07-26 16:17:09.773970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:50.223 [2024-07-26 16:17:09.773994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:50.223 [2024-07-26 16:17:09.774499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:50.223 [2024-07-26 16:17:09.774532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:50.223 [2024-07-26 16:17:09.774566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:50.223 [2024-07-26 16:17:09.774600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:50.223 passed 00:11:50.223 Test: blockdev nvme passthru rw ...passed 00:11:50.223 Test: blockdev nvme passthru vendor specific ...[2024-07-26 16:17:09.857591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:50.223 [2024-07-26 16:17:09.857650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:50.223 [2024-07-26 16:17:09.857945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:50.223 [2024-07-26 16:17:09.857979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:50.223 [2024-07-26 16:17:09.858268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:50.223 [2024-07-26 16:17:09.858300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:50.223 [2024-07-26 16:17:09.858588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:50.223 [2024-07-26 16:17:09.858619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:50.223 passed 00:11:50.223 Test: blockdev nvme admin passthru ...passed 00:11:50.223 Test: blockdev copy ...passed 00:11:50.223 00:11:50.223 Run Summary: Type Total Ran Passed Failed Inactive 00:11:50.223 suites 1 1 n/a 0 0 00:11:50.223 tests 23 23 23 0 0 00:11:50.223 asserts 152 152 152 0 n/a 00:11:50.223 00:11:50.223 Elapsed time = 1.661 seconds 00:11:51.160 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:51.160 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.160 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:51.160 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.160 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:51.160 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:51.160 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:51.160 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:11:51.160 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:51.160 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:11:51.160 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:51.160 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:51.160 rmmod nvme_tcp 00:11:51.161 rmmod nvme_fabrics 00:11:51.419 rmmod nvme_keyring 00:11:51.419 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:51.419 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:11:51.419 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
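The bdevio run summarized above was pointed at --json /dev/fd/62, which is simply what bash process substitution expands to: the generated controller config is streamed over an inherited file descriptor and never written to disk. A generic sketch of the pattern; gen_config stands in for gen_nvmf_target_json, and the assumption here is that the real helper wraps entries like this in a full bdev-subsystem config document (wrapper omitted):

    gen_config() {
        # single bdev_nvme_attach_controller entry, fields as printed in the trace
        printf '%s\n' '{ "method": "bdev_nvme_attach_controller", "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
            "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false } }'
    }
    ./test/bdev/bdevio/bdevio --json <(gen_config)   # the child sees the pipe as /dev/fd/NN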
00:11:51.419 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 585528 ']' 00:11:51.419 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 585528 00:11:51.419 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 585528 ']' 00:11:51.419 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 585528 00:11:51.419 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:11:51.419 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:51.419 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 585528 00:11:51.419 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:11:51.419 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:11:51.419 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 585528' 00:11:51.419 killing process with pid 585528 00:11:51.419 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 585528 00:11:51.419 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 585528 00:11:52.799 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:52.799 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:52.799 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:52.799 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:52.799 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:52.799 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.799 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:52.799 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.702 16:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:54.702 00:11:54.702 real 0m9.709s 00:11:54.702 user 0m24.412s 00:11:54.702 sys 0m2.415s 00:11:54.702 16:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:54.702 16:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:54.702 ************************************ 00:11:54.702 END TEST nvmf_bdevio 00:11:54.702 ************************************ 00:11:54.702 16:17:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:54.702 00:11:54.702 real 4m29.077s 00:11:54.702 user 11m32.211s 00:11:54.702 sys 1m13.712s 00:11:54.702 16:17:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:54.702 16:17:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:54.702 ************************************ 00:11:54.702 END TEST nvmf_target_core 00:11:54.702 ************************************ 00:11:54.702 16:17:14 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:54.702 16:17:14 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:54.703 16:17:14 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:54.703 16:17:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:54.703 ************************************ 00:11:54.703 START TEST nvmf_target_extra 00:11:54.703 ************************************ 00:11:54.703 16:17:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:54.961 * Looking for test storage... 00:11:54.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 
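Each test file re-sources nvmf/common.sh, which derives the initiator identity from nvme-cli instead of hard-coding it: NVME_HOSTNQN comes from 'nvme gen-hostnqn' and NVME_HOSTID reuses the UUID portion. A small sketch of how a host would then connect to the subsystem exported earlier; the suffix-stripping and the manual connect are illustrative only (the suite drives connects through its own helpers):

    NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}        # one way to recover the bare UUID seen in the trace
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"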
00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:54.961 ************************************ 00:11:54.961 START TEST nvmf_example 00:11:54.961 ************************************ 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:54.961 * Looking for test storage... 00:11:54.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:54.961 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:54.962 16:17:14 
nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.962 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.962 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.962 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:54.962 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.962 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:11:54.962 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:54.962 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:54.962 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:54.962 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:54.962 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:54.962 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:11:54.962 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:54.962 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:54.962 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:54.962 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:54.962 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:54.962 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:54.962 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:54.962 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:54.962 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:54.962 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:54.962 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:54.962 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:54.962 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:54.962 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:54.962 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:54.962 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:54.962 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:54.962 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:54.962 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.962 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:54.962 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.962 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:54.962 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:54.962 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:11:54.962 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:56.865 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:56.865 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:11:56.865 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:56.865 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:56.865 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:56.865 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 
-- # pci_drivers=() 00:11:56.865 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:56.865 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:11:56.865 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:56.865 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:11:56.865 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:56.866 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:56.866 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:56.866 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.866 16:17:16 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:56.866 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:56.866 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:57.126 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:57.126 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:57.126 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:57.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:57.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:11:57.126 00:11:57.126 --- 10.0.0.2 ping statistics --- 00:11:57.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.126 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:11:57.126 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:57.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:57.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:11:57.126 00:11:57.126 --- 10.0.0.1 ping statistics --- 00:11:57.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.126 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:11:57.126 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:57.126 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:11:57.126 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:57.126 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:57.126 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:57.126 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:57.126 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:57.126 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:57.126 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:57.126 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:57.126 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:57.126 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:57.126 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:57.126 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:57.126 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:57.126 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=588079 00:11:57.126 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:57.126 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:57.126 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 588079 00:11:57.126 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 588079 ']' 00:11:57.126 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.126 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:57.126 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.126 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:57.126 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:57.126 EAL: No free 2048 kB hugepages reported on node 1 00:11:58.063 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:58.063 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:11:58.063 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:58.063 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:58.063 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:58.063 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:58.063 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.063 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:58.063 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.063 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:58.063 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.063 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:58.322 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.322 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:58.322 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:58.322 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.322 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:58.322 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.322 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:58.322 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:58.322 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.322 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:58.322 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.322 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:58.322 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.322 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:58.322 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.322 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:58.322 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:58.322 EAL: No free 2048 kB hugepages reported on node 1 00:12:10.545 Initializing NVMe Controllers 00:12:10.545 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:10.545 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:10.545 Initialization complete. Launching workers. 00:12:10.545 ======================================================== 00:12:10.545 Latency(us) 00:12:10.545 Device Information : IOPS MiB/s Average min max 00:12:10.545 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11699.19 45.70 5470.11 1271.11 15407.04 00:12:10.545 ======================================================== 00:12:10.545 Total : 11699.19 45.70 5470.11 1271.11 15407.04 00:12:10.545 00:12:10.545 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:10.545 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:12:10.545 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:10.545 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:12:10.545 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:10.545 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:12:10.545 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:10.545 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:10.545 rmmod nvme_tcp 00:12:10.545 rmmod nvme_fabrics 00:12:10.545 rmmod nvme_keyring 00:12:10.545 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:10.545 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:12:10.545 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:12:10.545 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 588079 ']' 00:12:10.545 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 588079 00:12:10.545 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 588079 ']' 00:12:10.545 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 588079 00:12:10.545 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:12:10.545 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:10.545 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 
-- # ps --no-headers -o comm= 588079 00:12:10.545 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:12:10.545 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:12:10.545 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 588079' 00:12:10.545 killing process with pid 588079 00:12:10.545 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 588079 00:12:10.545 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 588079 00:12:10.545 nvmf threads initialize successfully 00:12:10.545 bdev subsystem init successfully 00:12:10.545 created a nvmf target service 00:12:10.545 create targets's poll groups done 00:12:10.545 all subsystems of target started 00:12:10.545 nvmf target is running 00:12:10.545 all subsystems of target stopped 00:12:10.545 destroy targets's poll groups done 00:12:10.545 destroyed the nvmf target service 00:12:10.545 bdev subsystem finish successfully 00:12:10.545 nvmf threads destroy successfully 00:12:10.545 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:10.545 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:10.545 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:10.545 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:10.545 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:10.545 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.545 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:10.545 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:11.926 00:12:11.926 real 0m17.036s 00:12:11.926 user 0m47.465s 00:12:11.926 sys 0m3.628s 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:11.926 ************************************ 00:12:11.926 END TEST nvmf_example 00:12:11.926 ************************************ 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 
00:12:11.926 ************************************ 00:12:11.926 START TEST nvmf_filesystem 00:12:11.926 ************************************ 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:11.926 * Looking for test storage... 00:12:11.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # 
CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:12:11.926 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # 
CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # 
CONFIG_EXAMPLES=y 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:11.927 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:12.188 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:12.188 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:12.188 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:12.188 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:12.188 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:12:12.188 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:12.188 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:12.188 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ 
#ifndef SPDK_CONFIG_H 00:12:12.189 #define SPDK_CONFIG_H 00:12:12.189 #define SPDK_CONFIG_APPS 1 00:12:12.189 #define SPDK_CONFIG_ARCH native 00:12:12.189 #define SPDK_CONFIG_ASAN 1 00:12:12.189 #undef SPDK_CONFIG_AVAHI 00:12:12.189 #undef SPDK_CONFIG_CET 00:12:12.189 #define SPDK_CONFIG_COVERAGE 1 00:12:12.189 #define SPDK_CONFIG_CROSS_PREFIX 00:12:12.189 #undef SPDK_CONFIG_CRYPTO 00:12:12.189 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:12.189 #undef SPDK_CONFIG_CUSTOMOCF 00:12:12.189 #undef SPDK_CONFIG_DAOS 00:12:12.189 #define SPDK_CONFIG_DAOS_DIR 00:12:12.189 #define SPDK_CONFIG_DEBUG 1 00:12:12.189 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:12.189 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:12.189 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:12.189 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:12.189 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:12.189 #undef SPDK_CONFIG_DPDK_UADK 00:12:12.189 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:12.189 #define SPDK_CONFIG_EXAMPLES 1 00:12:12.189 #undef SPDK_CONFIG_FC 00:12:12.189 #define SPDK_CONFIG_FC_PATH 00:12:12.189 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:12.189 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:12.189 #undef SPDK_CONFIG_FUSE 00:12:12.189 #undef SPDK_CONFIG_FUZZER 00:12:12.189 #define SPDK_CONFIG_FUZZER_LIB 00:12:12.189 #undef SPDK_CONFIG_GOLANG 00:12:12.189 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:12.189 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:12.189 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:12.189 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:12.189 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:12.189 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:12.189 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:12.189 #define SPDK_CONFIG_IDXD 1 00:12:12.189 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:12.189 #undef SPDK_CONFIG_IPSEC_MB 00:12:12.189 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:12.189 #define SPDK_CONFIG_ISAL 1 00:12:12.189 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:12.189 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:12.189 #define SPDK_CONFIG_LIBDIR 00:12:12.189 #undef SPDK_CONFIG_LTO 00:12:12.189 #define SPDK_CONFIG_MAX_LCORES 128 00:12:12.189 #define SPDK_CONFIG_NVME_CUSE 1 00:12:12.189 #undef SPDK_CONFIG_OCF 00:12:12.189 #define SPDK_CONFIG_OCF_PATH 00:12:12.189 #define SPDK_CONFIG_OPENSSL_PATH 00:12:12.189 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:12.189 #define SPDK_CONFIG_PGO_DIR 00:12:12.189 #undef SPDK_CONFIG_PGO_USE 00:12:12.189 #define SPDK_CONFIG_PREFIX /usr/local 00:12:12.189 #undef SPDK_CONFIG_RAID5F 00:12:12.189 #undef SPDK_CONFIG_RBD 00:12:12.189 #define SPDK_CONFIG_RDMA 1 00:12:12.189 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:12.189 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:12.189 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:12.189 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:12.189 #define SPDK_CONFIG_SHARED 1 00:12:12.189 #undef SPDK_CONFIG_SMA 00:12:12.189 #define SPDK_CONFIG_TESTS 1 00:12:12.189 #undef SPDK_CONFIG_TSAN 00:12:12.189 #define SPDK_CONFIG_UBLK 1 00:12:12.189 #define SPDK_CONFIG_UBSAN 1 00:12:12.189 #undef SPDK_CONFIG_UNIT_TESTS 00:12:12.189 #undef SPDK_CONFIG_URING 00:12:12.189 #define SPDK_CONFIG_URING_PATH 00:12:12.189 #undef SPDK_CONFIG_URING_ZNS 00:12:12.189 #undef SPDK_CONFIG_USDT 00:12:12.189 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:12.189 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:12.189 #undef SPDK_CONFIG_VFIO_USER 00:12:12.189 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:12.189 #define SPDK_CONFIG_VHOST 1 00:12:12.189 
#define SPDK_CONFIG_VIRTIO 1 00:12:12.189 #undef SPDK_CONFIG_VTUNE 00:12:12.189 #define SPDK_CONFIG_VTUNE_DIR 00:12:12.189 #define SPDK_CONFIG_WERROR 1 00:12:12.189 #define SPDK_CONFIG_WPDK_DIR 00:12:12.189 #undef SPDK_CONFIG_XNVME 00:12:12.189 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:12.189 16:17:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:12.189 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 
00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:12.190 16:17:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 1 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:12.190 16:17:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:12.190 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export 
SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export valgrind= 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j48 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 590038 ]] 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 590038 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:12:12.191 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.VUy6iC 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.VUy6iC/tests/target /tmp/spdk.VUy6iC 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@329 -- # df -T 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=953643008 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4330786816 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=55334019072 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=61994713088 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=6660694016 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30986100736 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30997356544 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=11255808 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 
00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=12376530944 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=12398944256 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=22413312 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30996348928 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30997356544 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=1007616 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=6199463936 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=6199468032 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:12:12.192 * Looking for test storage... 
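The set_test_storage trace around this point enumerates every mount with df -T, records each mount's filesystem type, total size and free space in bytes, then walks the candidate directories (the testdir, the mktemp fallback under /tmp/spdk.VUy6iC, and the fallback itself) and exports the first one whose backing mount can hold the requested ~2 GiB as SPDK_TEST_STORAGE. A minimal sketch of that selection, reusing requested_size and storage_candidates as set earlier in the trace and omitting the tmpfs/new_size special-casing the real helper also performs:

    # enumerate mounts the same way the trace does (df -T, drop the header line)
    declare -A fss sizes avails
    while read -r source fs size use avail _ mount; do
        fss["$mount"]=$fs
        sizes["$mount"]=$((size * 1024))     # df -T reports 1K blocks; keep bytes, as the trace does
        avails["$mount"]=$((avail * 1024))
    done < <(df -T | grep -v Filesystem)
    # pick the first candidate directory whose backing mount has enough free space
    for target_dir in "${storage_candidates[@]}"; do
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        target_space=${avails[$mount]:-0}
        if (( target_space >= requested_size )); then
            export SPDK_TEST_STORAGE=$target_dir
            break
        fi
    done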
00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=55334019072 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # new_size=8875286528 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:12.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:12.192 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:12.193 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:12.193 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:12.193 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:12.193 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:12.193 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:12.193 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:12.193 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:12.193 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:12:12.193 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.193 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.193 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.193 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:12.193 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.193 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:12:12.193 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:12.193 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:12.193 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 
']' 00:12:12.193 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:12.193 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:12.193 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:12.193 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:12.193 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:12.193 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:12:12.193 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:12.193 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:12.193 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:12.193 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:12.193 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:12.193 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:12.193 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:12.193 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.193 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.193 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.193 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:12.193 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:12.193 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:12:12.193 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:12:14.130 
16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:14.130 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:14.130 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:14.130 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:14.130 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:14.130 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:14.131 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:14.131 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:14.131 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:14.131 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:14.131 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:14.131 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:14.131 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:14.131 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:14.131 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:14.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:14.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:12:14.131 00:12:14.131 --- 10.0.0.2 ping statistics --- 00:12:14.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.131 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:12:14.131 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:14.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:14.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:12:14.131 00:12:14.131 --- 10.0.0.1 ping statistics --- 00:12:14.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.131 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:12:14.391 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:14.391 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:12:14.391 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:14.391 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:14.391 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:14.391 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:14.391 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:14.391 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:14.391 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:14.391 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:14.391 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:14.391 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:14.391 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:14.391 ************************************ 00:12:14.391 START TEST nvmf_filesystem_no_in_capsule 00:12:14.391 ************************************ 00:12:14.391 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:12:14.391 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:14.391 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:14.391 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:14.391 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:14.391 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.391 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=591664 00:12:14.391 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:14.391 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 591664 00:12:14.391 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 591664 ']' 00:12:14.391 16:17:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.391 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:14.391 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.391 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:14.391 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.391 [2024-07-26 16:17:34.032211] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:14.391 [2024-07-26 16:17:34.032362] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:14.391 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.650 [2024-07-26 16:17:34.176822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:14.909 [2024-07-26 16:17:34.441906] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:14.909 [2024-07-26 16:17:34.441978] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:14.909 [2024-07-26 16:17:34.442005] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:14.909 [2024-07-26 16:17:34.442026] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:14.909 [2024-07-26 16:17:34.442047] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
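nvmf_tcp_init and nvmfappstart, traced above, take the two E810 ports discovered earlier (cvl_0_0 and cvl_0_1), move the target-side port into a private network namespace, address both ends on 10.0.0.0/24, open TCP port 4420 through iptables, confirm reachability with a ping in each direction, and then launch nvmf_tgt inside the namespace and wait for its RPC socket. A condensed sketch of that sequence, with the workspace paths shortened and a simple poll of rpc_get_methods standing in for the waitforlisten helper:

    # build the phy NVMe/TCP rig the way the trace does (interface names from this run)
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # start the target inside the namespace, then block until its RPC socket answers
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done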
00:12:14.909 [2024-07-26 16:17:34.442169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.909 [2024-07-26 16:17:34.442238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:14.909 [2024-07-26 16:17:34.446095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:14.909 [2024-07-26 16:17:34.446096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.476 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:15.476 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:12:15.476 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:15.476 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:15.476 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:15.476 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.476 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:15.476 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:15.476 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.476 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:15.476 [2024-07-26 16:17:35.034076] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:15.476 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.476 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:15.476 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.476 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.044 Malloc1 00:12:16.044 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.044 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:16.044 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.044 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.044 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.044 16:17:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:16.044 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.044 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.044 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.044 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:16.044 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.044 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.044 [2024-07-26 16:17:35.622787] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:16.044 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.044 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:16.044 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:12:16.044 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:12:16.044 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:12:16.044 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:12:16.044 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:16.044 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.044 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.044 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.044 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:12:16.044 { 00:12:16.045 "name": "Malloc1", 00:12:16.045 "aliases": [ 00:12:16.045 "974e5837-4f8c-41d9-948e-c3e6b656ecc9" 00:12:16.045 ], 00:12:16.045 "product_name": "Malloc disk", 00:12:16.045 "block_size": 512, 00:12:16.045 "num_blocks": 1048576, 00:12:16.045 "uuid": "974e5837-4f8c-41d9-948e-c3e6b656ecc9", 00:12:16.045 "assigned_rate_limits": { 00:12:16.045 "rw_ios_per_sec": 0, 00:12:16.045 "rw_mbytes_per_sec": 0, 00:12:16.045 "r_mbytes_per_sec": 0, 00:12:16.045 "w_mbytes_per_sec": 0 00:12:16.045 }, 00:12:16.045 "claimed": true, 00:12:16.045 "claim_type": "exclusive_write", 00:12:16.045 "zoned": false, 00:12:16.045 "supported_io_types": { 00:12:16.045 "read": 
true, 00:12:16.045 "write": true, 00:12:16.045 "unmap": true, 00:12:16.045 "flush": true, 00:12:16.045 "reset": true, 00:12:16.045 "nvme_admin": false, 00:12:16.045 "nvme_io": false, 00:12:16.045 "nvme_io_md": false, 00:12:16.045 "write_zeroes": true, 00:12:16.045 "zcopy": true, 00:12:16.045 "get_zone_info": false, 00:12:16.045 "zone_management": false, 00:12:16.045 "zone_append": false, 00:12:16.045 "compare": false, 00:12:16.045 "compare_and_write": false, 00:12:16.045 "abort": true, 00:12:16.045 "seek_hole": false, 00:12:16.045 "seek_data": false, 00:12:16.045 "copy": true, 00:12:16.045 "nvme_iov_md": false 00:12:16.045 }, 00:12:16.045 "memory_domains": [ 00:12:16.045 { 00:12:16.045 "dma_device_id": "system", 00:12:16.045 "dma_device_type": 1 00:12:16.045 }, 00:12:16.045 { 00:12:16.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.045 "dma_device_type": 2 00:12:16.045 } 00:12:16.045 ], 00:12:16.045 "driver_specific": {} 00:12:16.045 } 00:12:16.045 ]' 00:12:16.045 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:12:16.045 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:12:16.045 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:12:16.045 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:12:16.045 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:12:16.045 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:12:16.045 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:16.045 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:16.612 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:16.612 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:12:16.612 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:16.612 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:16.612 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:19.148 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:19.148 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:19.148 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:12:19.148 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:19.148 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:19.148 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:19.148 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:19.148 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:19.148 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:19.148 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:19.148 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:19.148 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:19.148 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:19.148 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:19.148 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:19.148 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:19.148 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:19.148 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:19.717 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:21.095 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:21.095 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:21.095 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:21.095 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:21.095 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:21.095 ************************************ 00:12:21.095 START TEST filesystem_ext4 00:12:21.095 ************************************ 00:12:21.095 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
00:12:21.095 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:21.095 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:21.095 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:21.095 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:21.095 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:21.095 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:21.095 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:21.095 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:21.095 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:21.095 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:21.095 mke2fs 1.46.5 (30-Dec-2021) 00:12:21.095 Discarding device blocks: 0/522240 done 00:12:21.095 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:21.095 Filesystem UUID: b87b3b04-5baa-4530-abb6-0b670e50f6b3 00:12:21.095 Superblock backups stored on blocks: 00:12:21.095 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:21.095 00:12:21.095 Allocating group tables: 0/64 done 00:12:21.095 Writing inode tables: 0/64 done 00:12:22.474 Creating journal (8192 blocks): done 00:12:22.474 Writing superblocks and filesystem accounting information: 0/64 done 00:12:22.474 00:12:22.474 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:22.474 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:22.474 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:22.474 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:22.474 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:22.474 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:22.474 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:22.474 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:22.474 
16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 591664 00:12:22.474 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:22.474 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:22.475 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:22.475 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:22.475 00:12:22.475 real 0m1.607s 00:12:22.475 user 0m0.016s 00:12:22.475 sys 0m0.056s 00:12:22.475 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:22.475 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:22.475 ************************************ 00:12:22.475 END TEST filesystem_ext4 00:12:22.475 ************************************ 00:12:22.475 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:22.475 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:22.475 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:22.475 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:22.475 ************************************ 00:12:22.475 START TEST filesystem_btrfs 00:12:22.475 ************************************ 00:12:22.475 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:22.475 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:22.475 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:22.475 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:22.475 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:22.475 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:22.475 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:22.475 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:22.475 16:17:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:22.475 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:22.475 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:22.736 btrfs-progs v6.6.2 00:12:22.736 See https://btrfs.readthedocs.io for more information. 00:12:22.736 00:12:22.736 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:22.736 NOTE: several default settings have changed in version 5.15, please make sure 00:12:22.736 this does not affect your deployments: 00:12:22.736 - DUP for metadata (-m dup) 00:12:22.736 - enabled no-holes (-O no-holes) 00:12:22.736 - enabled free-space-tree (-R free-space-tree) 00:12:22.736 00:12:22.736 Label: (null) 00:12:22.736 UUID: 41f61493-2ac5-4389-b94f-1d5ca4dd4596 00:12:22.736 Node size: 16384 00:12:22.736 Sector size: 4096 00:12:22.736 Filesystem size: 510.00MiB 00:12:22.736 Block group profiles: 00:12:22.736 Data: single 8.00MiB 00:12:22.736 Metadata: DUP 32.00MiB 00:12:22.736 System: DUP 8.00MiB 00:12:22.736 SSD detected: yes 00:12:22.736 Zoned device: no 00:12:22.736 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:12:22.736 Runtime features: free-space-tree 00:12:22.736 Checksum: crc32c 00:12:22.736 Number of devices: 1 00:12:22.736 Devices: 00:12:22.736 ID SIZE PATH 00:12:22.736 1 510.00MiB /dev/nvme0n1p1 00:12:22.736 00:12:22.736 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:22.736 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:22.996 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:22.996 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:22.996 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:22.996 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:22.996 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:22.996 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:22.996 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 591664 00:12:22.996 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:22.996 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:22.996 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # 
lsblk -l -o NAME 00:12:22.996 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:22.996 00:12:22.996 real 0m0.537s 00:12:22.996 user 0m0.026s 00:12:22.996 sys 0m0.113s 00:12:22.996 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:22.996 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:22.996 ************************************ 00:12:22.996 END TEST filesystem_btrfs 00:12:22.996 ************************************ 00:12:22.996 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:22.996 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:22.996 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:22.996 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:22.996 ************************************ 00:12:22.996 START TEST filesystem_xfs 00:12:22.996 ************************************ 00:12:22.996 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:22.996 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:22.996 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:22.996 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:22.996 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:22.996 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:22.996 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:22.996 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:12:22.996 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:22.996 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:22.996 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:23.255 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:23.255 = sectsz=512 attr=2, projid32bit=1 00:12:23.255 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:23.255 = reflink=1 bigtime=1 
inobtcount=1 nrext64=0 00:12:23.255 data = bsize=4096 blocks=130560, imaxpct=25 00:12:23.255 = sunit=0 swidth=0 blks 00:12:23.255 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:23.255 log =internal log bsize=4096 blocks=16384, version=2 00:12:23.255 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:23.255 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:24.192 Discarding blocks...Done. 00:12:24.192 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:24.192 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:26.094 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:26.094 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:26.094 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:26.094 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:26.094 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:26.094 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:26.094 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 591664 00:12:26.094 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:26.094 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:26.094 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:26.094 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:26.094 00:12:26.094 real 0m2.978s 00:12:26.094 user 0m0.012s 00:12:26.094 sys 0m0.059s 00:12:26.094 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:26.094 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:26.094 ************************************ 00:12:26.094 END TEST filesystem_xfs 00:12:26.094 ************************************ 00:12:26.094 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:26.095 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:26.663 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:26.663 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:12:26.663 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:26.663 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:26.663 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:26.663 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.663 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:26.663 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.663 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:26.663 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:26.663 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.663 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:26.663 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.663 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:26.663 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 591664 00:12:26.663 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 591664 ']' 00:12:26.663 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 591664 00:12:26.663 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:26.663 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:26.663 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 591664 00:12:26.664 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:26.664 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:26.664 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 591664' 00:12:26.664 killing process with pid 591664 00:12:26.664 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 591664 00:12:26.664 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 591664 00:12:29.195 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:29.195 00:12:29.195 real 0m14.897s 00:12:29.195 user 0m55.196s 00:12:29.195 sys 0m2.020s 00:12:29.195 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:29.195 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:29.195 ************************************ 00:12:29.195 END TEST nvmf_filesystem_no_in_capsule 00:12:29.195 ************************************ 00:12:29.195 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:29.195 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:29.195 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:29.195 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:29.195 ************************************ 00:12:29.195 START TEST nvmf_filesystem_in_capsule 00:12:29.195 ************************************ 00:12:29.195 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:12:29.195 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:29.195 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:29.195 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:29.195 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:29.195 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:29.195 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=593610 00:12:29.195 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:29.195 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 593610 00:12:29.195 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 593610 ']' 00:12:29.195 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.195 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:29.195 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:29.195 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:29.195 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:29.455 [2024-07-26 16:17:48.985391] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:29.455 [2024-07-26 16:17:48.985521] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.455 EAL: No free 2048 kB hugepages reported on node 1 00:12:29.455 [2024-07-26 16:17:49.127405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:29.715 [2024-07-26 16:17:49.389480] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:29.715 [2024-07-26 16:17:49.389555] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:29.715 [2024-07-26 16:17:49.389585] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:29.715 [2024-07-26 16:17:49.389608] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:29.715 [2024-07-26 16:17:49.389630] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:29.715 [2024-07-26 16:17:49.389760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.715 [2024-07-26 16:17:49.389831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:29.715 [2024-07-26 16:17:49.389912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.715 [2024-07-26 16:17:49.389922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:30.283 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:30.283 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:12:30.283 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:30.283 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:30.283 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:30.283 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:30.283 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:30.283 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:30.283 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.283 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:30.283 [2024-07-26 16:17:49.956150] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport 
Init *** 00:12:30.283 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.283 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:30.283 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.283 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:30.850 Malloc1 00:12:30.850 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.850 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:30.850 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.850 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:30.850 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.850 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:30.851 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.851 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:30.851 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.851 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:30.851 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.851 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:30.851 [2024-07-26 16:17:50.543917] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:30.851 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.851 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:30.851 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:12:30.851 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:12:30.851 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:12:30.851 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:12:30.851 16:17:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:30.851 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.851 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:30.851 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.851 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:12:30.851 { 00:12:30.851 "name": "Malloc1", 00:12:30.851 "aliases": [ 00:12:30.851 "448e4109-a968-41f0-a396-f9cb059773fc" 00:12:30.851 ], 00:12:30.851 "product_name": "Malloc disk", 00:12:30.851 "block_size": 512, 00:12:30.851 "num_blocks": 1048576, 00:12:30.851 "uuid": "448e4109-a968-41f0-a396-f9cb059773fc", 00:12:30.851 "assigned_rate_limits": { 00:12:30.851 "rw_ios_per_sec": 0, 00:12:30.851 "rw_mbytes_per_sec": 0, 00:12:30.851 "r_mbytes_per_sec": 0, 00:12:30.851 "w_mbytes_per_sec": 0 00:12:30.851 }, 00:12:30.851 "claimed": true, 00:12:30.851 "claim_type": "exclusive_write", 00:12:30.851 "zoned": false, 00:12:30.851 "supported_io_types": { 00:12:30.851 "read": true, 00:12:30.851 "write": true, 00:12:30.851 "unmap": true, 00:12:30.851 "flush": true, 00:12:30.851 "reset": true, 00:12:30.851 "nvme_admin": false, 00:12:30.851 "nvme_io": false, 00:12:30.851 "nvme_io_md": false, 00:12:30.851 "write_zeroes": true, 00:12:30.851 "zcopy": true, 00:12:30.851 "get_zone_info": false, 00:12:30.851 "zone_management": false, 00:12:30.851 "zone_append": false, 00:12:30.851 "compare": false, 00:12:30.851 "compare_and_write": false, 00:12:30.851 "abort": true, 00:12:30.851 "seek_hole": false, 00:12:30.851 "seek_data": false, 00:12:30.851 "copy": true, 00:12:30.851 "nvme_iov_md": false 00:12:30.851 }, 00:12:30.851 "memory_domains": [ 00:12:30.851 { 00:12:30.851 "dma_device_id": "system", 00:12:30.851 "dma_device_type": 1 00:12:30.851 }, 00:12:30.851 { 00:12:30.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.851 "dma_device_type": 2 00:12:30.851 } 00:12:30.851 ], 00:12:30.851 "driver_specific": {} 00:12:30.851 } 00:12:30.851 ]' 00:12:30.851 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:12:30.851 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:12:30.851 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:12:31.110 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:12:31.110 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:12:31.110 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:12:31.110 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:31.110 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:31.709 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:31.709 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:12:31.709 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:31.709 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:31.709 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:33.612 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:33.612 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:33.612 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:33.612 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:33.612 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:33.612 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:33.612 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:33.612 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:33.612 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:33.612 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:33.612 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:33.612 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:33.612 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:33.612 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:33.612 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:33.612 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:33.612 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:33.871 16:17:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:34.438 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:35.373 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:35.373 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:35.373 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:35.373 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:35.373 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:35.632 ************************************ 00:12:35.632 START TEST filesystem_in_capsule_ext4 00:12:35.632 ************************************ 00:12:35.632 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:35.632 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:35.632 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:35.632 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:35.632 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:35.632 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:35.632 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:35.632 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:35.633 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:35.633 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:35.633 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:35.633 mke2fs 1.46.5 (30-Dec-2021) 00:12:35.633 Discarding device blocks: 0/522240 done 00:12:35.633 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:35.633 Filesystem UUID: fd6f6f14-794d-49a8-b488-f83abea6e0cf 00:12:35.633 Superblock backups stored on blocks: 00:12:35.633 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:35.633 00:12:35.633 Allocating group tables: 0/64 done 00:12:35.633 Writing inode tables: 
0/64 done 00:12:36.201 Creating journal (8192 blocks): done 00:12:37.030 Writing superblocks and filesystem accounting information: 0/64 6/64 done 00:12:37.030 00:12:37.030 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:37.030 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:37.030 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:37.030 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:37.030 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:37.030 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:37.030 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:37.030 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:37.030 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 593610 00:12:37.030 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:37.030 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:37.288 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:37.288 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:37.288 00:12:37.288 real 0m1.646s 00:12:37.288 user 0m0.018s 00:12:37.288 sys 0m0.058s 00:12:37.288 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:37.288 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:37.288 ************************************ 00:12:37.288 END TEST filesystem_in_capsule_ext4 00:12:37.288 ************************************ 00:12:37.288 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:37.288 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:37.288 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:37.288 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:37.288 
************************************ 00:12:37.289 START TEST filesystem_in_capsule_btrfs 00:12:37.289 ************************************ 00:12:37.289 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:37.289 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:37.289 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:37.289 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:37.289 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:37.289 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:37.289 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:37.289 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:37.289 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:37.289 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:37.289 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:37.289 btrfs-progs v6.6.2 00:12:37.289 See https://btrfs.readthedocs.io for more information. 00:12:37.289 00:12:37.289 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:37.289 NOTE: several default settings have changed in version 5.15, please make sure 00:12:37.289 this does not affect your deployments: 00:12:37.289 - DUP for metadata (-m dup) 00:12:37.289 - enabled no-holes (-O no-holes) 00:12:37.289 - enabled free-space-tree (-R free-space-tree) 00:12:37.289 00:12:37.289 Label: (null) 00:12:37.289 UUID: ec3acc84-f898-461b-8f24-923d1b471257 00:12:37.289 Node size: 16384 00:12:37.289 Sector size: 4096 00:12:37.289 Filesystem size: 510.00MiB 00:12:37.289 Block group profiles: 00:12:37.289 Data: single 8.00MiB 00:12:37.289 Metadata: DUP 32.00MiB 00:12:37.289 System: DUP 8.00MiB 00:12:37.289 SSD detected: yes 00:12:37.289 Zoned device: no 00:12:37.289 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:12:37.289 Runtime features: free-space-tree 00:12:37.289 Checksum: crc32c 00:12:37.289 Number of devices: 1 00:12:37.289 Devices: 00:12:37.289 ID SIZE PATH 00:12:37.289 1 510.00MiB /dev/nvme0n1p1 00:12:37.289 00:12:37.289 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:37.289 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:37.858 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:37.858 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:37.858 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:37.858 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:37.858 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:37.858 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:37.858 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 593610 00:12:37.858 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:37.858 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:37.858 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:37.858 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:37.858 00:12:37.858 real 0m0.712s 00:12:37.858 user 0m0.015s 00:12:37.858 sys 0m0.124s 00:12:37.858 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:37.858 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- common/autotest_common.sh@10 -- # set +x 00:12:37.858 ************************************ 00:12:37.858 END TEST filesystem_in_capsule_btrfs 00:12:37.858 ************************************ 00:12:37.858 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:37.858 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:37.858 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:37.858 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:37.858 ************************************ 00:12:37.858 START TEST filesystem_in_capsule_xfs 00:12:37.858 ************************************ 00:12:37.858 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:37.858 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:37.858 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:37.858 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:37.858 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:37.858 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:37.858 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:37.858 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:12:37.858 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:37.858 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:37.858 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:38.117 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:38.117 = sectsz=512 attr=2, projid32bit=1 00:12:38.117 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:38.117 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:38.117 data = bsize=4096 blocks=130560, imaxpct=25 00:12:38.117 = sunit=0 swidth=0 blks 00:12:38.117 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:38.117 log =internal log bsize=4096 blocks=16384, version=2 00:12:38.117 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:38.117 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:39.053 Discarding blocks...Done. 
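The mkfs.xfs geometry dump above matches the 510 MiB partition used throughout these tests: the data section reports bsize=4096 and blocks=130560, and 130560 × 4 KiB = 510 MiB exactly. A sketch of the equivalent standalone invocation behind the make_filesystem call traced here (the trace shows the helper passing -f, which overwrites the btrfs signature left by the previous pass):

    mkfs.xfs -f /dev/nvme0n1p1
    mount /dev/nvme0n1p1 /mnt/device          # the same touch/sync/rm/umount checks follow in the trace below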
00:12:39.053 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:39.053 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:41.584 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:41.584 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:41.584 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:41.584 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:41.584 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:41.584 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:41.584 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 593610 00:12:41.584 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:41.584 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:41.584 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:41.584 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:41.584 00:12:41.584 real 0m3.466s 00:12:41.584 user 0m0.016s 00:12:41.584 sys 0m0.056s 00:12:41.584 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:41.584 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:41.584 ************************************ 00:12:41.584 END TEST filesystem_in_capsule_xfs 00:12:41.584 ************************************ 00:12:41.584 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:41.584 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:41.584 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:41.584 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.584 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:41.584 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:12:41.584 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:41.584 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.584 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:41.584 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.584 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:41.584 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.584 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.584 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:41.584 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.584 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:41.584 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 593610 00:12:41.584 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 593610 ']' 00:12:41.584 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 593610 00:12:41.584 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:41.584 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:41.584 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 593610 00:12:41.842 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:41.842 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:41.842 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 593610' 00:12:41.842 killing process with pid 593610 00:12:41.842 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 593610 00:12:41.842 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 593610 00:12:44.375 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:44.375 00:12:44.375 real 0m15.058s 00:12:44.375 user 0m55.412s 00:12:44.375 sys 0m2.237s 00:12:44.375 16:18:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:44.375 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:44.375 ************************************ 00:12:44.375 END TEST nvmf_filesystem_in_capsule 00:12:44.375 ************************************ 00:12:44.375 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:44.375 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:44.375 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:12:44.375 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:44.375 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:12:44.375 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:44.375 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:44.375 rmmod nvme_tcp 00:12:44.375 rmmod nvme_fabrics 00:12:44.375 rmmod nvme_keyring 00:12:44.375 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:44.375 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:12:44.375 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:12:44.375 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:44.375 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:44.375 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:44.375 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:44.375 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:44.375 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:44.375 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.375 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:44.375 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:46.914 00:12:46.914 real 0m34.458s 00:12:46.914 user 1m51.454s 00:12:46.914 sys 0m5.902s 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:46.914 ************************************ 00:12:46.914 END TEST nvmf_filesystem 00:12:46.914 ************************************ 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:46.914 16:18:06 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:46.914 ************************************ 00:12:46.914 START TEST nvmf_target_discovery 00:12:46.914 ************************************ 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:46.914 * Looking for test storage... 00:12:46.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:46.914 16:18:06 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:46.914 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:46.915 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.915 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:46.915 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.915 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:46.915 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:46.915 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:12:46.915 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- 
# local -ga net_devs 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:48.817 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:48.817 16:18:08 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:48.817 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:48.817 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:48.817 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:48.818 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:48.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:48.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:12:48.818 00:12:48.818 --- 10.0.0.2 ping statistics --- 00:12:48.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.818 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:48.818 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:48.818 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:12:48.818 00:12:48.818 --- 10.0.0.1 ping statistics --- 00:12:48.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.818 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=598000 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 598000 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 598000 ']' 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:48.818 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.818 [2024-07-26 16:18:08.460381] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:48.818 [2024-07-26 16:18:08.460526] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:48.818 EAL: No free 2048 kB hugepages reported on node 1 00:12:49.077 [2024-07-26 16:18:08.596465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:49.335 [2024-07-26 16:18:08.855640] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:49.335 [2024-07-26 16:18:08.855713] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:49.335 [2024-07-26 16:18:08.855746] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:49.335 [2024-07-26 16:18:08.855767] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:49.335 [2024-07-26 16:18:08.855789] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:49.335 [2024-07-26 16:18:08.855920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:49.335 [2024-07-26 16:18:08.855987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:49.335 [2024-07-26 16:18:08.856113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.335 [2024-07-26 16:18:08.856121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:49.948 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:49.948 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:12:49.948 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:49.948 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:49.948 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.948 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:49.948 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:49.948 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.948 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.948 [2024-07-26 16:18:09.399282] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:49.948 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.948 16:18:09 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:49.948 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.949 Null1 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.949 [2024-07-26 16:18:09.440746] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.949 Null2 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.949 Null3 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.949 Null4 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.949 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:12:50.210 00:12:50.210 
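The discovery listing that follows is the output of the nvme-cli call traced just above. Restated as a standalone command against the same target (host identity as generated by nvme gen-hostnqn earlier in this run):

    nvme discover -t tcp -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
    # six records are expected: the current discovery subsystem, cnode1-cnode4, and the port 4430 referral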
Discovery Log Number of Records 6, Generation counter 6 00:12:50.210 =====Discovery Log Entry 0====== 00:12:50.210 trtype: tcp 00:12:50.210 adrfam: ipv4 00:12:50.210 subtype: current discovery subsystem 00:12:50.210 treq: not required 00:12:50.210 portid: 0 00:12:50.210 trsvcid: 4420 00:12:50.210 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:50.210 traddr: 10.0.0.2 00:12:50.210 eflags: explicit discovery connections, duplicate discovery information 00:12:50.211 sectype: none 00:12:50.211 =====Discovery Log Entry 1====== 00:12:50.211 trtype: tcp 00:12:50.211 adrfam: ipv4 00:12:50.211 subtype: nvme subsystem 00:12:50.211 treq: not required 00:12:50.211 portid: 0 00:12:50.211 trsvcid: 4420 00:12:50.211 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:50.211 traddr: 10.0.0.2 00:12:50.211 eflags: none 00:12:50.211 sectype: none 00:12:50.211 =====Discovery Log Entry 2====== 00:12:50.211 trtype: tcp 00:12:50.211 adrfam: ipv4 00:12:50.211 subtype: nvme subsystem 00:12:50.211 treq: not required 00:12:50.211 portid: 0 00:12:50.211 trsvcid: 4420 00:12:50.211 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:50.211 traddr: 10.0.0.2 00:12:50.211 eflags: none 00:12:50.211 sectype: none 00:12:50.211 =====Discovery Log Entry 3====== 00:12:50.211 trtype: tcp 00:12:50.211 adrfam: ipv4 00:12:50.211 subtype: nvme subsystem 00:12:50.211 treq: not required 00:12:50.211 portid: 0 00:12:50.211 trsvcid: 4420 00:12:50.211 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:50.211 traddr: 10.0.0.2 00:12:50.211 eflags: none 00:12:50.211 sectype: none 00:12:50.211 =====Discovery Log Entry 4====== 00:12:50.211 trtype: tcp 00:12:50.211 adrfam: ipv4 00:12:50.211 subtype: nvme subsystem 00:12:50.211 treq: not required 00:12:50.211 portid: 0 00:12:50.211 trsvcid: 4420 00:12:50.211 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:50.211 traddr: 10.0.0.2 00:12:50.211 eflags: none 00:12:50.211 sectype: none 00:12:50.211 =====Discovery Log Entry 5====== 00:12:50.211 trtype: tcp 00:12:50.211 adrfam: ipv4 00:12:50.211 subtype: discovery subsystem referral 00:12:50.211 treq: not required 00:12:50.211 portid: 0 00:12:50.211 trsvcid: 4430 00:12:50.211 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:50.211 traddr: 10.0.0.2 00:12:50.211 eflags: none 00:12:50.211 sectype: none 00:12:50.211 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:50.211 Perform nvmf subsystem discovery via RPC 00:12:50.211 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:50.211 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.211 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:50.211 [ 00:12:50.211 { 00:12:50.211 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:50.211 "subtype": "Discovery", 00:12:50.211 "listen_addresses": [ 00:12:50.211 { 00:12:50.211 "trtype": "TCP", 00:12:50.211 "adrfam": "IPv4", 00:12:50.211 "traddr": "10.0.0.2", 00:12:50.211 "trsvcid": "4420" 00:12:50.211 } 00:12:50.211 ], 00:12:50.211 "allow_any_host": true, 00:12:50.211 "hosts": [] 00:12:50.211 }, 00:12:50.211 { 00:12:50.211 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:50.211 "subtype": "NVMe", 00:12:50.211 "listen_addresses": [ 00:12:50.211 { 00:12:50.211 "trtype": "TCP", 00:12:50.211 "adrfam": "IPv4", 00:12:50.211 "traddr": "10.0.0.2", 00:12:50.211 "trsvcid": "4420" 00:12:50.211 } 00:12:50.211 ], 00:12:50.211 
"allow_any_host": true, 00:12:50.211 "hosts": [], 00:12:50.211 "serial_number": "SPDK00000000000001", 00:12:50.211 "model_number": "SPDK bdev Controller", 00:12:50.211 "max_namespaces": 32, 00:12:50.211 "min_cntlid": 1, 00:12:50.211 "max_cntlid": 65519, 00:12:50.211 "namespaces": [ 00:12:50.211 { 00:12:50.211 "nsid": 1, 00:12:50.211 "bdev_name": "Null1", 00:12:50.211 "name": "Null1", 00:12:50.211 "nguid": "5FEA4354BCB44D318C82336F1D00967B", 00:12:50.211 "uuid": "5fea4354-bcb4-4d31-8c82-336f1d00967b" 00:12:50.211 } 00:12:50.211 ] 00:12:50.211 }, 00:12:50.211 { 00:12:50.211 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:50.211 "subtype": "NVMe", 00:12:50.211 "listen_addresses": [ 00:12:50.211 { 00:12:50.211 "trtype": "TCP", 00:12:50.211 "adrfam": "IPv4", 00:12:50.211 "traddr": "10.0.0.2", 00:12:50.211 "trsvcid": "4420" 00:12:50.211 } 00:12:50.211 ], 00:12:50.211 "allow_any_host": true, 00:12:50.211 "hosts": [], 00:12:50.211 "serial_number": "SPDK00000000000002", 00:12:50.211 "model_number": "SPDK bdev Controller", 00:12:50.211 "max_namespaces": 32, 00:12:50.211 "min_cntlid": 1, 00:12:50.211 "max_cntlid": 65519, 00:12:50.211 "namespaces": [ 00:12:50.211 { 00:12:50.211 "nsid": 1, 00:12:50.211 "bdev_name": "Null2", 00:12:50.211 "name": "Null2", 00:12:50.211 "nguid": "3866D0F9C81D42B2827F4EB411F913CD", 00:12:50.211 "uuid": "3866d0f9-c81d-42b2-827f-4eb411f913cd" 00:12:50.211 } 00:12:50.211 ] 00:12:50.211 }, 00:12:50.211 { 00:12:50.211 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:50.211 "subtype": "NVMe", 00:12:50.211 "listen_addresses": [ 00:12:50.211 { 00:12:50.211 "trtype": "TCP", 00:12:50.211 "adrfam": "IPv4", 00:12:50.211 "traddr": "10.0.0.2", 00:12:50.211 "trsvcid": "4420" 00:12:50.211 } 00:12:50.211 ], 00:12:50.211 "allow_any_host": true, 00:12:50.211 "hosts": [], 00:12:50.211 "serial_number": "SPDK00000000000003", 00:12:50.211 "model_number": "SPDK bdev Controller", 00:12:50.211 "max_namespaces": 32, 00:12:50.211 "min_cntlid": 1, 00:12:50.211 "max_cntlid": 65519, 00:12:50.211 "namespaces": [ 00:12:50.211 { 00:12:50.211 "nsid": 1, 00:12:50.211 "bdev_name": "Null3", 00:12:50.211 "name": "Null3", 00:12:50.211 "nguid": "B6F4BF8B72234EFAA1B5E07AB1D1E041", 00:12:50.211 "uuid": "b6f4bf8b-7223-4efa-a1b5-e07ab1d1e041" 00:12:50.211 } 00:12:50.211 ] 00:12:50.211 }, 00:12:50.211 { 00:12:50.211 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:50.211 "subtype": "NVMe", 00:12:50.211 "listen_addresses": [ 00:12:50.211 { 00:12:50.211 "trtype": "TCP", 00:12:50.211 "adrfam": "IPv4", 00:12:50.211 "traddr": "10.0.0.2", 00:12:50.211 "trsvcid": "4420" 00:12:50.211 } 00:12:50.211 ], 00:12:50.211 "allow_any_host": true, 00:12:50.211 "hosts": [], 00:12:50.211 "serial_number": "SPDK00000000000004", 00:12:50.211 "model_number": "SPDK bdev Controller", 00:12:50.211 "max_namespaces": 32, 00:12:50.211 "min_cntlid": 1, 00:12:50.211 "max_cntlid": 65519, 00:12:50.211 "namespaces": [ 00:12:50.211 { 00:12:50.211 "nsid": 1, 00:12:50.211 "bdev_name": "Null4", 00:12:50.211 "name": "Null4", 00:12:50.211 "nguid": "11ACE8436B6044309D5C94F9B2881FF1", 00:12:50.211 "uuid": "11ace843-6b60-4430-9d5c-94f9b2881ff1" 00:12:50.211 } 00:12:50.211 ] 00:12:50.211 } 00:12:50.211 ] 00:12:50.211 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.211 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:50.211 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:50.211 16:18:09 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:50.211 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.211 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:50.211 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.211 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:50.211 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.211 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:50.211 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.211 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:50.211 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:50.211 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 
00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:50.212 rmmod nvme_tcp 00:12:50.212 rmmod nvme_fabrics 00:12:50.212 rmmod nvme_keyring 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- 
# return 0 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 598000 ']' 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 598000 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 598000 ']' 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 598000 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 598000 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 598000' 00:12:50.212 killing process with pid 598000 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 598000 00:12:50.212 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 598000 00:12:51.591 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:51.591 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:51.591 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:51.591 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:51.591 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:51.591 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.591 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:51.591 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.499 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:53.499 00:12:53.499 real 0m7.115s 00:12:53.499 user 0m8.772s 00:12:53.499 sys 0m2.031s 00:12:53.499 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:53.499 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:53.499 ************************************ 00:12:53.499 END TEST nvmf_target_discovery 00:12:53.499 ************************************ 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:53.757 ************************************ 00:12:53.757 START TEST nvmf_referrals 00:12:53.757 ************************************ 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:53.757 * Looking for test storage... 00:12:53.757 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:53.757 16:18:13 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:12:53.757 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:56.288 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:56.288 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:12:56.288 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:56.288 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:56.288 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:56.288 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:56.288 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:56.288 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 
00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:56.289 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:56.289 
16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:56.289 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:56.289 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:56.289 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:56.289 16:18:15 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:56.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:56.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:12:56.289 00:12:56.289 --- 10.0.0.2 ping statistics --- 00:12:56.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.289 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:56.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:56.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:12:56.289 00:12:56.289 --- 10.0.0.1 ping statistics --- 00:12:56.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.289 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:56.289 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:56.290 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:56.290 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:56.290 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:56.290 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=600339 00:12:56.290 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:56.290 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 600339 00:12:56.290 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 600339 ']' 00:12:56.290 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.290 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:56.290 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:56.290 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:56.290 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:56.290 [2024-07-26 16:18:15.701574] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:56.290 [2024-07-26 16:18:15.701721] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:56.290 EAL: No free 2048 kB hugepages reported on node 1 00:12:56.290 [2024-07-26 16:18:15.844957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:56.548 [2024-07-26 16:18:16.113316] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:56.548 [2024-07-26 16:18:16.113400] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:56.548 [2024-07-26 16:18:16.113428] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:56.548 [2024-07-26 16:18:16.113456] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:56.548 [2024-07-26 16:18:16.113481] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:56.548 [2024-07-26 16:18:16.113618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:56.548 [2024-07-26 16:18:16.113677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:56.548 [2024-07-26 16:18:16.113723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.548 [2024-07-26 16:18:16.113735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:57.114 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:57.114 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:12:57.114 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:57.114 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:57.114 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:57.114 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:57.114 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:57.114 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.114 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:57.114 [2024-07-26 16:18:16.732135] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:57.114 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.114 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:57.114 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.114 16:18:16 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:57.114 [2024-07-26 16:18:16.744951] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:57.114 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.114 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:57.114 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.114 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:57.114 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.114 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:57.114 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.114 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:57.114 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.114 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:57.114 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.114 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:57.114 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.114 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:57.114 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:57.114 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.114 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:57.114 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.114 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:57.114 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:57.114 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:57.114 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:57.114 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:57.114 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.114 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:57.114 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:57.114 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.114 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 
127.0.0.3 127.0.0.4 00:12:57.115 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:57.115 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:57.115 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:57.115 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:57.115 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:57.115 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:57.115 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:57.372 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:57.372 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:57.372 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:57.372 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.372 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:57.372 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.372 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:57.372 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.372 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:57.372 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.372 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:57.372 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.372 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:57.372 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.372 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:57.372 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:57.372 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.372 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:57.372 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.372 16:18:17 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:57.372 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:57.372 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:57.372 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:57.372 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:57.372 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:57.372 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:57.630 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:57.630 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:57.630 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:57.630 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.630 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:57.630 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.630 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:57.630 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.630 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:57.630 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.630 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:57.630 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:57.630 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:57.630 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:57.630 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.630 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:57.630 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:57.630 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.630 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:57.630 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:57.630 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 
00:12:57.630 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:57.630 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:57.630 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:57.630 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:57.630 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:57.888 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:57.888 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:57.888 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:57.888 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:57.888 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:57.888 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:57.888 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:57.888 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:57.888 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:57.888 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:57.888 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:57.888 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:57.888 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:57.888 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:57.888 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:57.888 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.888 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:57.888 16:18:17 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.888 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:57.888 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:57.888 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:57.888 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.888 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:57.888 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:57.888 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:57.888 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.146 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:58.146 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:58.146 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:58.146 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:58.146 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:58.146 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:58.146 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:58.146 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:58.146 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:58.146 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:58.146 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:58.146 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:58.146 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:58.146 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:58.146 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:58.146 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:58.146 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:58.146 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:58.146 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:58.146 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:58.146 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:58.404 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:58.404 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:58.404 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.404 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:58.404 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.404 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:58.404 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:58.404 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.404 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:58.404 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.404 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:58.404 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:58.404 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:58.404 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:58.404 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:58.404 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:58.404 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:58.404 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:58.404 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:58.405 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:58.405 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:58.405 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:58.405 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 
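For reference, the referral checks traced above reduce to a small pattern: compare what the target reports over RPC against what an initiator sees from the discovery service, then remove the referrals and confirm the list is empty. The following is a condensed sketch of that pattern, not the test script itself; the rpc.py path is assumed from the workspace layout, and the --hostnqn/--hostid options shown in the trace are omitted for brevity.

    # Sketch only: referral verification as exercised by target/referrals.sh (assumed rpc.py path)
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Referrals as the target reports them over RPC
    $RPC nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

    # Referrals as an initiator sees them via the discovery service on 10.0.0.2:8009
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

    # Remove a referral, then confirm none remain
    $RPC nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery
    $RPC nvmf_discovery_get_referrals | jq length   # expected output: 0

Both views are sorted before comparison so the test is insensitive to the order in which the discovery log pages are returned.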
00:12:58.405 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:58.405 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:12:58.405 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:58.405 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:58.405 rmmod nvme_tcp 00:12:58.405 rmmod nvme_fabrics 00:12:58.663 rmmod nvme_keyring 00:12:58.663 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:58.663 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:12:58.663 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:12:58.663 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 600339 ']' 00:12:58.663 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 600339 00:12:58.663 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 600339 ']' 00:12:58.663 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 600339 00:12:58.663 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:58.663 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:58.663 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 600339 00:12:58.663 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:58.663 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:58.663 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 600339' 00:12:58.663 killing process with pid 600339 00:12:58.663 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 600339 00:12:58.663 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 600339 00:13:00.038 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:00.038 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:00.038 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:00.038 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:00.038 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:00.038 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.038 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:00.038 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:01.944 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:01.944 00:13:01.944 real 0m8.280s 00:13:01.944 user 0m13.675s 00:13:01.944 sys 0m2.398s 00:13:01.944 16:18:21 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:01.944 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:01.944 ************************************ 00:13:01.944 END TEST nvmf_referrals 00:13:01.944 ************************************ 00:13:01.944 16:18:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:01.944 16:18:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:01.944 16:18:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:01.944 16:18:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:01.944 ************************************ 00:13:01.944 START TEST nvmf_connect_disconnect 00:13:01.944 ************************************ 00:13:01.944 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:01.944 * Looking for test storage... 00:13:01.944 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:01.944 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:01.944 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:01.944 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:01.944 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:01.944 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:01.944 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:01.944 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:01.944 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:01.944 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:01.944 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:01.944 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:01.944 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:01.944 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:01.944 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:01.944 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:01.944 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:01.944 16:18:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:01.944 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:01.944 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:01.944 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:01.944 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:01.944 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:01.944 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.944 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.944 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.944 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:01.945 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.945 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:13:01.945 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:01.945 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:01.945 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:01.945 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:01.945 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:01.945 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:01.945 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:01.945 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:01.945 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:01.945 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:01.945 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:01.945 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:01.945 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:01.945 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:01.945 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:01.945 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:01.945 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.945 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:01.945 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:01.945 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:01.945 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:01.945 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:13:01.945 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- 
# set +x 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:04.477 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:04.477 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:04.477 16:18:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:04.477 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:04.477 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:04.478 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:04.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:04.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:13:04.478 00:13:04.478 --- 10.0.0.2 ping statistics --- 00:13:04.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.478 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:04.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:04.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:13:04.478 00:13:04.478 --- 10.0.0.1 ping statistics --- 00:13:04.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.478 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=602837 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 602837 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 602837 ']' 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:04.478 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:04.478 [2024-07-26 16:18:23.979646] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
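The setup traced just above splits the two ice ports between an initiator and a target network namespace, verifies reachability with ping, and then launches nvmf_tgt inside the namespace. A condensed sketch of that sequence follows, using the interface and address names that appear in the trace; the nvmf_tgt path is shortened for readability and the commands are illustrative, not a copy of nvmf/common.sh.

    # Sketch only: move one port (cvl_0_0) into a target namespace, keep cvl_0_1 for the initiator
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Sanity-check reachability in both directions
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # Start the target inside the namespace (full path in the trace: .../spdk/build/bin/nvmf_tgt)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

Because the target runs inside cvl_0_0_ns_spdk, every later RPC and listener in this test is issued through "ip netns exec cvl_0_0_ns_spdk", which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD in the trace.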
00:13:04.478 [2024-07-26 16:18:23.979787] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:04.478 EAL: No free 2048 kB hugepages reported on node 1 00:13:04.478 [2024-07-26 16:18:24.121594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:04.736 [2024-07-26 16:18:24.386474] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:04.736 [2024-07-26 16:18:24.386547] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:04.736 [2024-07-26 16:18:24.386582] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:04.736 [2024-07-26 16:18:24.386606] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:04.736 [2024-07-26 16:18:24.386631] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:04.736 [2024-07-26 16:18:24.386753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:04.736 [2024-07-26 16:18:24.386814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:04.736 [2024-07-26 16:18:24.386860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.736 [2024-07-26 16:18:24.386872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:05.301 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:05.301 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:13:05.301 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:05.301 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:05.301 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:05.301 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:05.301 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:05.301 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.301 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:05.301 [2024-07-26 16:18:24.940929] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:05.301 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.301 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:05.301 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.301 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:05.301 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.301 16:18:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:05.301 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:05.301 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.301 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:05.301 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.301 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:05.302 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.302 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:05.302 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.302 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:05.302 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.302 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:05.302 [2024-07-26 16:18:25.045468] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:05.302 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.302 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:13:05.302 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:13:05.302 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:13:05.302 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:07.829 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.292 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.815 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.335 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.858 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.808 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.706 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.233 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.315 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.256 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.150 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.202 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.730 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.653 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.707 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.614 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.139 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.666 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.195 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.097 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.153 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.728 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.156 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.680 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.102 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.629 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.050 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.638 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.063 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:57.965 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.493 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.052 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:04.951 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.477 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.948 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.474 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.900 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.465 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.363 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.415 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.317 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:15:37.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.377 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.276 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.805 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.361 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.310 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.836 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:59.362 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.259 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:03.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:10.793 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.217 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:20.270 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:22.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:24.694 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:29.746 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:31.673 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:34.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:36.726 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:38.624 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:41.149 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:43.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:45.589 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:48.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:50.646 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:52.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:55.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:57.653 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:59.609 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:59.609 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:59.609 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:59.609 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:59.609 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:16:59.609 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:59.609 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:16:59.609 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:59.609 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:59.609 rmmod nvme_tcp 00:16:59.609 rmmod nvme_fabrics 00:16:59.609 rmmod nvme_keyring 00:16:59.609 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:59.609 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:16:59.609 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:16:59.609 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 602837 ']' 00:16:59.609 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 602837 00:16:59.609 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 602837 ']' 00:16:59.609 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 602837 00:16:59.609 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:16:59.609 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:59.609 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 602837 00:16:59.609 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:59.609 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:59.609 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 602837' 00:16:59.609 killing process with pid 602837 00:16:59.609 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 602837 00:16:59.609 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 602837 00:17:01.513 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:01.513 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:01.514 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:01.514 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:01.514 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:01.514 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.514 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:01.514 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:03.422 00:17:03.422 real 4m1.224s 00:17:03.422 user 15m11.800s 00:17:03.422 sys 0m37.746s 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@10 -- # set +x 00:17:03.422 ************************************ 00:17:03.422 END TEST nvmf_connect_disconnect 00:17:03.422 ************************************ 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:03.422 ************************************ 00:17:03.422 START TEST nvmf_multitarget 00:17:03.422 ************************************ 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:03.422 * Looking for test storage... 00:17:03.422 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:03.422 16:22:22 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:03.422 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:03.423 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.423 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:03.423 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.423 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:03.423 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:03.423 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:17:03.423 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:05.321 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:05.321 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:17:05.321 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:05.321 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:05.321 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:05.321 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:05.321 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:05.321 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:17:05.321 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:05.321 16:22:24 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:17:05.321 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:17:05.321 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:17:05.321 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:17:05.321 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:17:05.321 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:17:05.321 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:05.321 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:05.321 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:05.321 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:05.321 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:05.321 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:05.321 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:05.321 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:05.321 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:05.321 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:05.321 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:05.321 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:05.321 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:05.321 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:05.321 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:05.321 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:05.322 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:05.322 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:05.322 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:05.322 Found net devices 
under 0000:0a:00.1: cvl_0_1 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:05.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:05.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:17:05.322 00:17:05.322 --- 10.0.0.2 ping statistics --- 00:17:05.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.322 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:05.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:05.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:17:05.322 00:17:05.322 --- 10.0.0.1 ping statistics --- 00:17:05.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.322 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:05.322 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:05.322 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=634432 00:17:05.322 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:05.322 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 634432 00:17:05.322 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 634432 ']' 00:17:05.322 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.322 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:05.322 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
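The nvmftestinit phase traced above builds the TCP test bed by moving one port of the dual-port NIC into a private network namespace, leaving its sibling in the default namespace, and then checking reachability in both directions. A condensed sketch of that sequence, using the interface and namespace names from this run (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk); the real logic lives in nvmf/common.sh's nvmf_tcp_init and covers more hardware layouts:

    TARGET_IF=cvl_0_0              # moved into the namespace, gets 10.0.0.2
    INITIATOR_IF=cvl_0_1           # stays in the default namespace, gets 10.0.0.1
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT   # 4420 is the NVMe/TCP port used by the tests
    ping -c 1 10.0.0.2                        # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator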
00:17:05.322 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:05.322 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:05.580 [2024-07-26 16:22:25.096555] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:05.580 [2024-07-26 16:22:25.096706] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:05.580 EAL: No free 2048 kB hugepages reported on node 1 00:17:05.580 [2024-07-26 16:22:25.240116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:05.838 [2024-07-26 16:22:25.504473] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:05.838 [2024-07-26 16:22:25.504551] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:05.838 [2024-07-26 16:22:25.504586] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:05.838 [2024-07-26 16:22:25.504607] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:05.838 [2024-07-26 16:22:25.504629] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:05.838 [2024-07-26 16:22:25.504752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:05.838 [2024-07-26 16:22:25.504814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:05.838 [2024-07-26 16:22:25.504861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.838 [2024-07-26 16:22:25.504871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:06.403 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:06.403 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:17:06.403 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:06.403 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:06.403 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:06.403 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:06.403 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:06.403 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:06.403 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:17:06.403 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:17:06.403 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:17:06.660 "nvmf_tgt_1" 00:17:06.660 16:22:26 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:17:06.660 "nvmf_tgt_2" 00:17:06.661 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:06.661 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:17:06.917 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:17:06.917 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:17:06.917 true 00:17:06.917 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:17:07.175 true 00:17:07.175 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:07.175 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:17:07.175 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:17:07.175 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:07.175 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:17:07.175 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:07.175 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:17:07.175 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:07.175 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:17:07.175 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:07.175 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:07.175 rmmod nvme_tcp 00:17:07.175 rmmod nvme_fabrics 00:17:07.175 rmmod nvme_keyring 00:17:07.175 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:07.175 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:17:07.175 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:17:07.175 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 634432 ']' 00:17:07.175 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 634432 00:17:07.175 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 634432 ']' 00:17:07.175 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 634432 00:17:07.175 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:17:07.175 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
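multitarget.sh drives everything through test/nvmf/target/multitarget_rpc.py, and the checks above amount to a create/verify/delete cycle around the default target. Roughly, with the target names and -s 32 size taken straight from the trace (a condensation, not the script verbatim; the assertions mirror the jq length comparisons):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]    # only the default target at start
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]    # default + the two new targets
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]    # back to just the default target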
00:17:07.175 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 634432 00:17:07.175 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:07.175 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:07.175 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 634432' 00:17:07.175 killing process with pid 634432 00:17:07.175 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 634432 00:17:07.175 16:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 634432 00:17:08.552 16:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:08.552 16:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:08.552 16:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:08.552 16:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:08.552 16:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:08.552 16:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.552 16:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:08.552 16:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.458 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:10.458 00:17:10.458 real 0m7.307s 00:17:10.458 user 0m11.025s 00:17:10.458 sys 0m2.032s 00:17:10.458 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:10.458 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:10.458 ************************************ 00:17:10.458 END TEST nvmf_multitarget 00:17:10.458 ************************************ 00:17:10.716 16:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:10.716 16:22:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:10.716 16:22:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:10.716 16:22:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:10.716 ************************************ 00:17:10.716 START TEST nvmf_rpc 00:17:10.716 ************************************ 00:17:10.716 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:10.716 * Looking for test storage... 
00:17:10.716 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:10.716 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:10.716 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:17:10.716 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:10.716 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:10.716 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:10.716 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:10.716 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:10.716 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:10.716 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:10.716 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:10.716 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:10.717 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:10.717 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:10.717 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:10.717 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:10.717 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:10.717 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:10.717 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:10.717 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:10.717 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:10.717 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:10.717 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:10.717 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.717 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.717 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.717 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:17:10.717 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.717 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:17:10.717 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:10.717 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:10.717 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:10.717 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:10.717 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:10.717 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:10.717 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:10.717 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:10.717 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:17:10.717 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:17:10.717 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:10.717 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:10.717 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:10.717 16:22:30 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:10.717 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:10.717 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.717 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:10.717 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.717 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:10.717 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:10.717 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:17:10.717 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:12.620 16:22:32 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:12.620 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:12.620 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:12.620 
16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:12.620 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:12.620 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:12.620 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:12.621 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:12.621 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:12.621 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:12.621 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:12.621 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:12.621 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:12.621 16:22:32 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:12.621 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:12.621 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:12.621 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:12.621 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:12.621 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:12.621 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:12.621 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:12.621 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:12.621 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:17:12.621 00:17:12.621 --- 10.0.0.2 ping statistics --- 00:17:12.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:12.621 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:17:12.621 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:12.621 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:12.621 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:17:12.621 00:17:12.621 --- 10.0.0.1 ping statistics --- 00:17:12.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:12.621 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:17:12.621 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:12.621 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:17:12.621 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:12.621 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:12.621 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:12.621 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:12.621 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:12.621 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:12.621 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:12.879 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:17:12.879 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:12.879 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:12.879 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.879 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=636679 00:17:12.879 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:12.879 16:22:32 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 636679 00:17:12.879 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 636679 ']' 00:17:12.879 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.879 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:12.879 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:12.879 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:12.879 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.879 [2024-07-26 16:22:32.475098] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:12.879 [2024-07-26 16:22:32.475264] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:12.879 EAL: No free 2048 kB hugepages reported on node 1 00:17:12.879 [2024-07-26 16:22:32.615955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:13.137 [2024-07-26 16:22:32.883400] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:13.137 [2024-07-26 16:22:32.883470] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:13.137 [2024-07-26 16:22:32.883498] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:13.137 [2024-07-26 16:22:32.883519] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:13.137 [2024-07-26 16:22:32.883541] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
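For the rpc.sh run the target application is started inside the namespace created above, with tracepoint group mask 0xFFFF and a four-core reactor mask (0xF), and the test then blocks until the RPC socket answers. A minimal stand-in for that start-and-wait pattern; the real waitforlisten helper in autotest_common.sh retries up to 100 times and handles more failure modes, so this loop is only illustrative:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Poll the UNIX-domain RPC socket until the target responds (illustrative loop, not the real helper).
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
        sleep 0.5
    done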
00:17:13.137 [2024-07-26 16:22:32.883666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:13.137 [2024-07-26 16:22:32.883731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:13.137 [2024-07-26 16:22:32.883787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.137 [2024-07-26 16:22:32.883794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:13.703 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:13.703 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:17:13.703 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:13.703 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:13.703 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.703 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:13.703 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:17:13.703 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.703 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.703 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.703 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:17:13.703 "tick_rate": 2700000000, 00:17:13.703 "poll_groups": [ 00:17:13.703 { 00:17:13.703 "name": "nvmf_tgt_poll_group_000", 00:17:13.703 "admin_qpairs": 0, 00:17:13.703 "io_qpairs": 0, 00:17:13.703 "current_admin_qpairs": 0, 00:17:13.703 "current_io_qpairs": 0, 00:17:13.703 "pending_bdev_io": 0, 00:17:13.703 "completed_nvme_io": 0, 00:17:13.703 "transports": [] 00:17:13.703 }, 00:17:13.703 { 00:17:13.703 "name": "nvmf_tgt_poll_group_001", 00:17:13.703 "admin_qpairs": 0, 00:17:13.703 "io_qpairs": 0, 00:17:13.703 "current_admin_qpairs": 0, 00:17:13.703 "current_io_qpairs": 0, 00:17:13.703 "pending_bdev_io": 0, 00:17:13.703 "completed_nvme_io": 0, 00:17:13.703 "transports": [] 00:17:13.703 }, 00:17:13.703 { 00:17:13.703 "name": "nvmf_tgt_poll_group_002", 00:17:13.703 "admin_qpairs": 0, 00:17:13.703 "io_qpairs": 0, 00:17:13.703 "current_admin_qpairs": 0, 00:17:13.703 "current_io_qpairs": 0, 00:17:13.703 "pending_bdev_io": 0, 00:17:13.703 "completed_nvme_io": 0, 00:17:13.703 "transports": [] 00:17:13.703 }, 00:17:13.703 { 00:17:13.703 "name": "nvmf_tgt_poll_group_003", 00:17:13.703 "admin_qpairs": 0, 00:17:13.703 "io_qpairs": 0, 00:17:13.703 "current_admin_qpairs": 0, 00:17:13.703 "current_io_qpairs": 0, 00:17:13.703 "pending_bdev_io": 0, 00:17:13.703 "completed_nvme_io": 0, 00:17:13.703 "transports": [] 00:17:13.703 } 00:17:13.703 ] 00:17:13.703 }' 00:17:13.703 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:17:13.703 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:17:13.703 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:17:13.703 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:13.703 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
00:17:13.703 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:17:13.961 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:17:13.961 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:13.961 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.961 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.961 [2024-07-26 16:22:33.497129] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:13.961 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.961 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:17:13.961 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.961 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.961 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.961 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:17:13.961 "tick_rate": 2700000000, 00:17:13.961 "poll_groups": [ 00:17:13.961 { 00:17:13.961 "name": "nvmf_tgt_poll_group_000", 00:17:13.961 "admin_qpairs": 0, 00:17:13.961 "io_qpairs": 0, 00:17:13.961 "current_admin_qpairs": 0, 00:17:13.961 "current_io_qpairs": 0, 00:17:13.961 "pending_bdev_io": 0, 00:17:13.961 "completed_nvme_io": 0, 00:17:13.961 "transports": [ 00:17:13.961 { 00:17:13.961 "trtype": "TCP" 00:17:13.961 } 00:17:13.961 ] 00:17:13.961 }, 00:17:13.961 { 00:17:13.961 "name": "nvmf_tgt_poll_group_001", 00:17:13.961 "admin_qpairs": 0, 00:17:13.961 "io_qpairs": 0, 00:17:13.961 "current_admin_qpairs": 0, 00:17:13.961 "current_io_qpairs": 0, 00:17:13.961 "pending_bdev_io": 0, 00:17:13.961 "completed_nvme_io": 0, 00:17:13.961 "transports": [ 00:17:13.961 { 00:17:13.961 "trtype": "TCP" 00:17:13.961 } 00:17:13.961 ] 00:17:13.961 }, 00:17:13.961 { 00:17:13.961 "name": "nvmf_tgt_poll_group_002", 00:17:13.961 "admin_qpairs": 0, 00:17:13.961 "io_qpairs": 0, 00:17:13.961 "current_admin_qpairs": 0, 00:17:13.961 "current_io_qpairs": 0, 00:17:13.961 "pending_bdev_io": 0, 00:17:13.961 "completed_nvme_io": 0, 00:17:13.961 "transports": [ 00:17:13.961 { 00:17:13.962 "trtype": "TCP" 00:17:13.962 } 00:17:13.962 ] 00:17:13.962 }, 00:17:13.962 { 00:17:13.962 "name": "nvmf_tgt_poll_group_003", 00:17:13.962 "admin_qpairs": 0, 00:17:13.962 "io_qpairs": 0, 00:17:13.962 "current_admin_qpairs": 0, 00:17:13.962 "current_io_qpairs": 0, 00:17:13.962 "pending_bdev_io": 0, 00:17:13.962 "completed_nvme_io": 0, 00:17:13.962 "transports": [ 00:17:13.962 { 00:17:13.962 "trtype": "TCP" 00:17:13.962 } 00:17:13.962 ] 00:17:13.962 } 00:17:13.962 ] 00:17:13.962 }' 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:17:13.962 16:22:33 
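The jcount/jsum helpers above are thin wrappers that pipe nvmf_get_stats output through jq and reduce it with wc or awk; with a 0xF core mask there are four poll groups and, before any host connects, every qpair counter sums to zero. The aggregation pattern as used in the trace, sketched by effect rather than by the helpers' exact definitions:

    stats=$(rpc_cmd nvmf_get_stats)     # rpc_cmd is the test framework's wrapper around the SPDK JSON-RPC client

    echo "$stats" | jq '.poll_groups[].name' | wc -l                              # jcount: 4 poll groups for -m 0xF
    echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1}END{print s}'  # jsum: 0 before any connections
    echo "$stats" | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1}END{print s}'  # jsum: 0 before any connections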
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.962 Malloc1 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.962 [2024-07-26 16:22:33.687857] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:17:13.962 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:17:13.962 [2024-07-26 16:22:33.711024] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:17:14.220 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:14.220 could not add new controller: failed to write to nvme-fabrics device 00:17:14.220 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:17:14.220 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:14.220 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:14.220 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:14.220 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:14.220 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.220 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.220 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.220 16:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:14.786 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:17:14.786 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:14.786 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:14.786 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:14.786 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:16.685 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:16.685 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:16.685 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:16.685 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:16.685 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:16.685 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:16.685 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:16.943 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:16.943 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:16.943 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:16.943 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:16.943 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:16.943 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:16.943 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:16.943 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:16.943 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:16.943 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.943 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.943 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.943 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:16.943 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:17:16.943 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:16.943 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:17:16.943 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:16.943 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:17:16.943 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:16.943 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:17:16.943 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:16.943 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:17:16.943 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:17:16.943 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:16.943 [2024-07-26 16:22:36.517240] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:17:16.943 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:16.943 could not add new controller: failed to write to nvme-fabrics device 00:17:16.943 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:17:16.943 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:16.943 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:16.943 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:16.943 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:17:16.943 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.943 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.944 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.944 16:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:17.510 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:17:17.510 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:17.510 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:17.510 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:17.510 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 
00:17:19.407 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:19.407 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:19.407 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:19.407 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:19.407 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:19.407 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:19.407 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:19.664 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:19.664 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:19.664 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:19.664 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:19.664 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:19.664 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:19.664 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:19.922 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:19.922 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:19.922 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.922 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.922 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.922 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:17:19.922 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:19.922 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:19.922 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.922 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.922 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.922 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:19.922 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.922 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.922 [2024-07-26 16:22:39.456635] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.922 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.922 
16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:19.922 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.922 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.922 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.922 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:19.922 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.922 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.922 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.922 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:20.491 16:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:20.491 16:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:20.491 16:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:20.491 16:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:20.491 16:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:23.024 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:23.024 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:23.024 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:23.024 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:23.024 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:23.024 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:23.024 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:23.024 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:23.024 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:23.024 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:23.024 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:23.024 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:23.024 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:23.024 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:23.024 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
00:17:23.024 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:23.024 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.024 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.024 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.024 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:23.024 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.024 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.024 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.024 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:23.024 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:23.024 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.024 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.024 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.024 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:23.024 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.024 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.024 [2024-07-26 16:22:42.390228] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:23.024 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.024 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:23.024 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.024 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.024 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.024 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:23.024 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.024 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.024 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.024 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:23.281 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:23.281 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1198 -- # local i=0 00:17:23.281 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:23.281 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:23.281 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:25.866 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:25.866 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:25.866 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:25.866 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:25.866 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:25.866 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:25.866 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:25.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:25.866 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:25.866 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:25.866 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:25.866 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:25.866 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:25.866 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:25.866 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:25.866 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:25.866 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.866 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.866 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.866 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:25.866 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.866 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.866 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.866 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:25.866 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:25.866 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.866 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:17:25.866 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.866 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:25.866 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.866 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.867 [2024-07-26 16:22:45.241521] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:25.867 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.867 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:25.867 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.867 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.867 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.867 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:25.867 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.867 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.867 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.867 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:26.431 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:26.431 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:26.431 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:26.431 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:26.431 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:28.334 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:28.334 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:28.334 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:28.334 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:28.334 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:28.334 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:28.334 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:28.594 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:28.594 16:22:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:28.594 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:28.594 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:28.594 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:28.594 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:28.594 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:28.594 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:28.594 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:28.594 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.594 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.594 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.594 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:28.594 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.594 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.594 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.594 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:28.594 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:28.595 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.595 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.595 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.595 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:28.595 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.595 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.595 [2024-07-26 16:22:48.170671] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:28.595 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.595 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:28.595 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.595 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.595 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.595 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:28.595 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.595 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.595 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.595 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:29.165 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:29.165 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:29.165 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:29.165 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:29.165 16:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:31.069 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:31.069 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:31.069 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:31.069 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:31.069 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:31.069 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:31.069 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:31.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:31.326 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:31.326 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:31.326 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:31.326 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:31.326 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:31.326 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:31.326 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:31.326 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:31.326 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.326 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.326 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.326 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:31.326 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.326 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.584 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.584 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:31.584 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:31.584 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.584 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.584 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.584 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:31.584 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.584 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.584 [2024-07-26 16:22:51.103229] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.584 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.584 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:31.584 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.584 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.584 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.584 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:31.584 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.584 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.584 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.584 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:32.150 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:32.150 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:32.150 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:32.150 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:32.150 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:34.053 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:34.313 16:22:53 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:34.313 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:34.313 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:34.313 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:34.313 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:34.313 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:34.313 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:34.313 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:34.313 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:34.313 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:34.313 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:34.313 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:34.313 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:34.313 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:34.313 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:34.313 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.313 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.313 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.313 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:34.313 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.313 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.313 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.313 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:34.313 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:34.313 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:34.313 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.313 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.313 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.313 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:34.313 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.313 16:22:54 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.313 [2024-07-26 16:22:54.036764] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:34.313 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.313 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:34.313 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.313 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.313 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.313 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:34.313 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.313 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.313 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.313 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:34.313 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.313 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.313 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.313 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:34.313 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.313 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.313 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.313 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:34.313 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:34.313 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.313 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.572 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.572 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:34.572 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.572 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.572 [2024-07-26 16:22:54.084811] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:34.572 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.572 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:34.572 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.572 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.572 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.572 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:34.572 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.572 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.572 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.572 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:34.572 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.572 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.572 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.572 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:34.572 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.572 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.572 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.572 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:34.572 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:34.572 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.572 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.572 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.572 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:34.572 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.573 [2024-07-26 16:22:54.133021] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.573 [2024-07-26 16:22:54.181195] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.573 [2024-07-26 16:22:54.229376] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.573 16:22:54 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:34.573 "tick_rate": 2700000000, 00:17:34.573 "poll_groups": [ 00:17:34.573 { 00:17:34.573 "name": "nvmf_tgt_poll_group_000", 00:17:34.573 "admin_qpairs": 2, 00:17:34.573 "io_qpairs": 84, 00:17:34.573 "current_admin_qpairs": 0, 00:17:34.573 "current_io_qpairs": 0, 00:17:34.573 "pending_bdev_io": 0, 00:17:34.573 "completed_nvme_io": 119, 00:17:34.573 "transports": [ 00:17:34.573 { 00:17:34.573 "trtype": "TCP" 00:17:34.573 } 00:17:34.573 ] 00:17:34.573 }, 00:17:34.573 { 00:17:34.573 "name": "nvmf_tgt_poll_group_001", 00:17:34.573 "admin_qpairs": 2, 00:17:34.573 "io_qpairs": 84, 00:17:34.573 "current_admin_qpairs": 0, 00:17:34.573 "current_io_qpairs": 0, 00:17:34.573 "pending_bdev_io": 0, 00:17:34.573 "completed_nvme_io": 170, 00:17:34.573 "transports": [ 00:17:34.573 { 00:17:34.573 "trtype": "TCP" 00:17:34.573 } 00:17:34.573 ] 00:17:34.573 }, 00:17:34.573 { 00:17:34.573 "name": "nvmf_tgt_poll_group_002", 00:17:34.573 "admin_qpairs": 1, 00:17:34.573 "io_qpairs": 84, 00:17:34.573 "current_admin_qpairs": 0, 00:17:34.573 "current_io_qpairs": 0, 00:17:34.573 "pending_bdev_io": 0, 00:17:34.573 "completed_nvme_io": 232, 00:17:34.573 "transports": [ 00:17:34.573 { 00:17:34.573 "trtype": "TCP" 00:17:34.573 } 00:17:34.573 ] 00:17:34.573 }, 00:17:34.573 { 00:17:34.573 "name": "nvmf_tgt_poll_group_003", 00:17:34.573 "admin_qpairs": 2, 00:17:34.573 "io_qpairs": 84, 00:17:34.573 "current_admin_qpairs": 0, 00:17:34.573 "current_io_qpairs": 0, 00:17:34.573 "pending_bdev_io": 0, 00:17:34.573 "completed_nvme_io": 165, 00:17:34.573 "transports": [ 00:17:34.573 { 00:17:34.573 "trtype": "TCP" 00:17:34.573 } 00:17:34.573 ] 00:17:34.573 } 00:17:34.573 ] 00:17:34.573 }' 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:34.573 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:34.574 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:34.574 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq 
'.poll_groups[].io_qpairs' 00:17:34.574 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:34.833 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:17:34.833 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:34.833 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:34.833 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:34.833 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:34.833 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:17:34.833 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:34.833 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:17:34.833 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:34.833 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:34.833 rmmod nvme_tcp 00:17:34.833 rmmod nvme_fabrics 00:17:34.833 rmmod nvme_keyring 00:17:34.833 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:34.833 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:17:34.833 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:17:34.833 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 636679 ']' 00:17:34.833 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 636679 00:17:34.833 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 636679 ']' 00:17:34.833 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 636679 00:17:34.833 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:17:34.833 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:34.833 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 636679 00:17:34.833 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:34.833 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:34.833 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 636679' 00:17:34.833 killing process with pid 636679 00:17:34.833 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 636679 00:17:34.833 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 636679 00:17:36.211 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:36.211 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:36.211 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:36.211 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:36.211 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:36.211 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.211 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:36.211 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.750 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:38.750 00:17:38.750 real 0m27.688s 00:17:38.750 user 1m29.096s 00:17:38.750 sys 0m4.258s 00:17:38.750 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:38.750 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:38.750 ************************************ 00:17:38.750 END TEST nvmf_rpc 00:17:38.750 ************************************ 00:17:38.750 16:22:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:38.750 16:22:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:38.750 16:22:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:38.750 16:22:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:38.750 ************************************ 00:17:38.750 START TEST nvmf_invalid 00:17:38.750 ************************************ 00:17:38.750 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:38.750 * Looking for test storage... 00:17:38.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:38.750 16:22:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:38.750 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.751 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:38.751 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:38.751 16:22:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:17:38.751 16:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:40.662 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:40.662 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:17:40.662 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:40.662 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:40.662 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:40.662 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:40.662 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:40.662 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:17:40.662 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:40.662 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:17:40.662 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:17:40.662 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:17:40.662 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:17:40.662 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:17:40.662 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:17:40.662 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:40.662 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:40.662 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:40.662 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:40.662 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:40.662 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:40.662 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:40.662 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:40.662 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:40.662 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:40.662 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:40.662 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:40.662 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 
]] 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:40.663 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:40.663 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:40.663 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:40.663 16:22:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:40.663 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:40.663 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:40.663 16:23:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:40.663 16:23:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:40.663 16:23:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:40.663 16:23:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:40.663 16:23:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:40.663 16:23:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:40.663 16:23:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:40.663 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:40.663 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:17:40.663 00:17:40.663 --- 10.0.0.2 ping statistics --- 00:17:40.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.663 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:17:40.663 16:23:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:40.663 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:40.663 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:17:40.663 00:17:40.663 --- 10.0.0.1 ping statistics --- 00:17:40.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.663 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:17:40.663 16:23:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:40.663 16:23:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:17:40.663 16:23:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:40.663 16:23:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:40.663 16:23:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:40.663 16:23:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:40.663 16:23:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:40.663 16:23:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:40.663 16:23:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:40.663 16:23:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:40.663 16:23:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:40.663 16:23:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:40.663 16:23:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:40.663 16:23:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=641426 00:17:40.663 16:23:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:40.663 16:23:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 641426 00:17:40.663 16:23:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 641426 ']' 00:17:40.664 16:23:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.664 16:23:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:40.664 16:23:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.664 16:23:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:40.664 16:23:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:40.664 [2024-07-26 16:23:00.236880] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:40.664 [2024-07-26 16:23:00.237036] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:40.664 EAL: No free 2048 kB hugepages reported on node 1 00:17:40.664 [2024-07-26 16:23:00.376189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:40.931 [2024-07-26 16:23:00.613005] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:40.932 [2024-07-26 16:23:00.613098] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:40.932 [2024-07-26 16:23:00.613128] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:40.932 [2024-07-26 16:23:00.613146] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:40.932 [2024-07-26 16:23:00.613176] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
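The invalid.sh trace that follows drives nvmf_create_subsystem through scripts/rpc.py with deliberately malformed arguments and checks the JSON-RPC error text returned by the target. A minimal sketch of the same three calls, assuming an nvmf_tgt is already running and listening on the default /var/tmp/spdk.sock (the NQNs and expected error strings are taken from the trace below; the socket path is an assumption):

    # Non-existent target name: expect JSON-RPC code -32603, "Unable to find target foobar"
    scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode30269

    # Serial number containing a control character (0x1f): expect code -32602, "Invalid SN ..."
    scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode5616

    # Model number containing a control character (0x1f): expect code -32602, "Invalid MN ..."
    scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode3897
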
00:17:40.932 [2024-07-26 16:23:00.613298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.932 [2024-07-26 16:23:00.613371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.932 [2024-07-26 16:23:00.613408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.932 [2024-07-26 16:23:00.613419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:41.497 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:41.497 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:17:41.497 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:41.497 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:41.497 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:41.497 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:41.497 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:41.497 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode30269 00:17:41.757 [2024-07-26 16:23:01.496243] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:41.757 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:41.757 { 00:17:41.757 "nqn": "nqn.2016-06.io.spdk:cnode30269", 00:17:41.757 "tgt_name": "foobar", 00:17:41.757 "method": "nvmf_create_subsystem", 00:17:41.757 "req_id": 1 00:17:41.757 } 00:17:41.757 Got JSON-RPC error response 00:17:41.757 response: 00:17:41.757 { 00:17:41.757 "code": -32603, 00:17:41.757 "message": "Unable to find target foobar" 00:17:41.757 }' 00:17:41.757 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:41.757 { 00:17:41.757 "nqn": "nqn.2016-06.io.spdk:cnode30269", 00:17:41.757 "tgt_name": "foobar", 00:17:41.757 "method": "nvmf_create_subsystem", 00:17:41.757 "req_id": 1 00:17:41.757 } 00:17:41.757 Got JSON-RPC error response 00:17:41.757 response: 00:17:41.757 { 00:17:41.757 "code": -32603, 00:17:41.757 "message": "Unable to find target foobar" 00:17:41.757 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:42.015 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:42.015 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode5616 00:17:42.272 [2024-07-26 16:23:01.797375] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5616: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:42.273 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:42.273 { 00:17:42.273 "nqn": "nqn.2016-06.io.spdk:cnode5616", 00:17:42.273 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:42.273 "method": "nvmf_create_subsystem", 00:17:42.273 "req_id": 1 00:17:42.273 } 00:17:42.273 Got JSON-RPC error 
response 00:17:42.273 response: 00:17:42.273 { 00:17:42.273 "code": -32602, 00:17:42.273 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:42.273 }' 00:17:42.273 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:42.273 { 00:17:42.273 "nqn": "nqn.2016-06.io.spdk:cnode5616", 00:17:42.273 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:42.273 "method": "nvmf_create_subsystem", 00:17:42.273 "req_id": 1 00:17:42.273 } 00:17:42.273 Got JSON-RPC error response 00:17:42.273 response: 00:17:42.273 { 00:17:42.273 "code": -32602, 00:17:42.273 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:42.273 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:42.273 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:42.273 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode3897 00:17:42.531 [2024-07-26 16:23:02.058226] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3897: invalid model number 'SPDK_Controller' 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:42.531 { 00:17:42.531 "nqn": "nqn.2016-06.io.spdk:cnode3897", 00:17:42.531 "model_number": "SPDK_Controller\u001f", 00:17:42.531 "method": "nvmf_create_subsystem", 00:17:42.531 "req_id": 1 00:17:42.531 } 00:17:42.531 Got JSON-RPC error response 00:17:42.531 response: 00:17:42.531 { 00:17:42.531 "code": -32602, 00:17:42.531 "message": "Invalid MN SPDK_Controller\u001f" 00:17:42.531 }' 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:42.531 { 00:17:42.531 "nqn": "nqn.2016-06.io.spdk:cnode3897", 00:17:42.531 "model_number": "SPDK_Controller\u001f", 00:17:42.531 "method": "nvmf_create_subsystem", 00:17:42.531 "req_id": 1 00:17:42.531 } 00:17:42.531 Got JSON-RPC error response 00:17:42.531 response: 00:17:42.531 { 00:17:42.531 "code": -32602, 00:17:42.531 "message": "Invalid MN SPDK_Controller\u001f" 00:17:42.531 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 76 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.531 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:17:42.532 16:23:02 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:17:42.532 
16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ L == \- ]] 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'LTSvC;}c`uvjs-eeqnil' 00:17:42.532 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'LTSvC;}c`uvjs-eeqnil' nqn.2016-06.io.spdk:cnode21091 00:17:42.791 [2024-07-26 16:23:02.363319] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21091: invalid serial number 'LTSvC;}c`uvjs-eeqnil' 00:17:42.791 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:42.791 { 00:17:42.791 "nqn": "nqn.2016-06.io.spdk:cnode21091", 00:17:42.791 "serial_number": "LT\u007fSvC;}c`uvjs-eeqnil", 00:17:42.791 "method": "nvmf_create_subsystem", 00:17:42.791 "req_id": 1 00:17:42.791 } 00:17:42.791 Got JSON-RPC error response 00:17:42.791 response: 00:17:42.791 { 00:17:42.791 "code": -32602, 00:17:42.791 "message": "Invalid SN LT\u007fSvC;}c`uvjs-eeqnil" 00:17:42.791 }' 00:17:42.791 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:42.791 { 00:17:42.791 "nqn": "nqn.2016-06.io.spdk:cnode21091", 00:17:42.791 "serial_number": "LT\u007fSvC;}c`uvjs-eeqnil", 00:17:42.791 "method": "nvmf_create_subsystem", 00:17:42.791 "req_id": 1 00:17:42.791 } 00:17:42.791 Got JSON-RPC error response 00:17:42.791 response: 00:17:42.792 { 00:17:42.792 "code": -32602, 00:17:42.792 "message": "Invalid SN LT\u007fSvC;}c`uvjs-eeqnil" 00:17:42.792 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:17:42.792 
16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 
00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 
00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:17:42.792 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=9 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ M == \- ]] 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # 
echo 'MDH7*~F^9@|nAAlRxAxhW"ZY,U)mhF}'\''29.auQ\&' 00:17:42.793 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'MDH7*~F^9@|nAAlRxAxhW"ZY,U)mhF}'\''29.auQ\&' nqn.2016-06.io.spdk:cnode10657 00:17:43.051 [2024-07-26 16:23:02.728596] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10657: invalid model number 'MDH7*~F^9@|nAAlRxAxhW"ZY,U)mhF}'29.auQ\&' 00:17:43.051 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:43.051 { 00:17:43.051 "nqn": "nqn.2016-06.io.spdk:cnode10657", 00:17:43.051 "model_number": "MDH7*~\u007fF^9@|nAAlRxAxhW\"ZY,U)mhF}'\''29.auQ\\&", 00:17:43.051 "method": "nvmf_create_subsystem", 00:17:43.051 "req_id": 1 00:17:43.051 } 00:17:43.051 Got JSON-RPC error response 00:17:43.051 response: 00:17:43.051 { 00:17:43.051 "code": -32602, 00:17:43.051 "message": "Invalid MN MDH7*~\u007fF^9@|nAAlRxAxhW\"ZY,U)mhF}'\''29.auQ\\&" 00:17:43.051 }' 00:17:43.051 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:43.051 { 00:17:43.051 "nqn": "nqn.2016-06.io.spdk:cnode10657", 00:17:43.051 "model_number": "MDH7*~\u007fF^9@|nAAlRxAxhW\"ZY,U)mhF}'29.auQ\\&", 00:17:43.051 "method": "nvmf_create_subsystem", 00:17:43.051 "req_id": 1 00:17:43.051 } 00:17:43.051 Got JSON-RPC error response 00:17:43.051 response: 00:17:43.051 { 00:17:43.051 "code": -32602, 00:17:43.051 "message": "Invalid MN MDH7*~\u007fF^9@|nAAlRxAxhW\"ZY,U)mhF}'29.auQ\\&" 00:17:43.051 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:43.051 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:17:43.309 [2024-07-26 16:23:02.981539] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:43.309 16:23:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:43.567 16:23:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:17:43.567 16:23:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:43.567 16:23:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:43.567 16:23:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:17:43.567 16:23:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:17:43.824 [2024-07-26 16:23:03.476662] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:43.824 16:23:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:43.824 { 00:17:43.824 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:43.824 "listen_address": { 00:17:43.824 "trtype": "tcp", 00:17:43.824 "traddr": "", 00:17:43.824 "trsvcid": "4421" 00:17:43.824 }, 00:17:43.824 "method": "nvmf_subsystem_remove_listener", 00:17:43.824 "req_id": 1 00:17:43.824 } 00:17:43.824 Got JSON-RPC error response 00:17:43.824 response: 00:17:43.824 { 00:17:43.824 "code": -32602, 00:17:43.824 "message": "Invalid parameters" 00:17:43.824 }' 00:17:43.824 16:23:03 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:17:43.824 { 00:17:43.824 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:43.824 "listen_address": { 00:17:43.824 "trtype": "tcp", 00:17:43.824 "traddr": "", 00:17:43.824 "trsvcid": "4421" 00:17:43.824 }, 00:17:43.824 "method": "nvmf_subsystem_remove_listener", 00:17:43.824 "req_id": 1 00:17:43.824 } 00:17:43.824 Got JSON-RPC error response 00:17:43.824 response: 00:17:43.824 { 00:17:43.824 "code": -32602, 00:17:43.824 "message": "Invalid parameters" 00:17:43.824 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:43.824 16:23:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9528 -i 0 00:17:44.081 [2024-07-26 16:23:03.733484] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9528: invalid cntlid range [0-65519] 00:17:44.081 16:23:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:44.081 { 00:17:44.081 "nqn": "nqn.2016-06.io.spdk:cnode9528", 00:17:44.081 "min_cntlid": 0, 00:17:44.081 "method": "nvmf_create_subsystem", 00:17:44.081 "req_id": 1 00:17:44.081 } 00:17:44.081 Got JSON-RPC error response 00:17:44.081 response: 00:17:44.081 { 00:17:44.081 "code": -32602, 00:17:44.081 "message": "Invalid cntlid range [0-65519]" 00:17:44.081 }' 00:17:44.081 16:23:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:44.081 { 00:17:44.081 "nqn": "nqn.2016-06.io.spdk:cnode9528", 00:17:44.081 "min_cntlid": 0, 00:17:44.081 "method": "nvmf_create_subsystem", 00:17:44.081 "req_id": 1 00:17:44.081 } 00:17:44.081 Got JSON-RPC error response 00:17:44.081 response: 00:17:44.081 { 00:17:44.081 "code": -32602, 00:17:44.081 "message": "Invalid cntlid range [0-65519]" 00:17:44.081 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:44.081 16:23:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6679 -i 65520 00:17:44.338 [2024-07-26 16:23:03.994329] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6679: invalid cntlid range [65520-65519] 00:17:44.338 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:17:44.338 { 00:17:44.338 "nqn": "nqn.2016-06.io.spdk:cnode6679", 00:17:44.338 "min_cntlid": 65520, 00:17:44.338 "method": "nvmf_create_subsystem", 00:17:44.338 "req_id": 1 00:17:44.338 } 00:17:44.338 Got JSON-RPC error response 00:17:44.338 response: 00:17:44.338 { 00:17:44.338 "code": -32602, 00:17:44.338 "message": "Invalid cntlid range [65520-65519]" 00:17:44.338 }' 00:17:44.338 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:44.338 { 00:17:44.338 "nqn": "nqn.2016-06.io.spdk:cnode6679", 00:17:44.338 "min_cntlid": 65520, 00:17:44.338 "method": "nvmf_create_subsystem", 00:17:44.338 "req_id": 1 00:17:44.338 } 00:17:44.338 Got JSON-RPC error response 00:17:44.338 response: 00:17:44.338 { 00:17:44.338 "code": -32602, 00:17:44.338 "message": "Invalid cntlid range [65520-65519]" 00:17:44.338 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:44.338 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode12065 -I 0 00:17:44.595 [2024-07-26 16:23:04.251166] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12065: invalid cntlid range [1-0] 00:17:44.595 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:44.595 { 00:17:44.595 "nqn": "nqn.2016-06.io.spdk:cnode12065", 00:17:44.595 "max_cntlid": 0, 00:17:44.595 "method": "nvmf_create_subsystem", 00:17:44.595 "req_id": 1 00:17:44.595 } 00:17:44.595 Got JSON-RPC error response 00:17:44.595 response: 00:17:44.595 { 00:17:44.595 "code": -32602, 00:17:44.595 "message": "Invalid cntlid range [1-0]" 00:17:44.595 }' 00:17:44.595 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:44.595 { 00:17:44.595 "nqn": "nqn.2016-06.io.spdk:cnode12065", 00:17:44.595 "max_cntlid": 0, 00:17:44.595 "method": "nvmf_create_subsystem", 00:17:44.595 "req_id": 1 00:17:44.595 } 00:17:44.595 Got JSON-RPC error response 00:17:44.595 response: 00:17:44.595 { 00:17:44.595 "code": -32602, 00:17:44.595 "message": "Invalid cntlid range [1-0]" 00:17:44.595 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:44.595 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31204 -I 65520 00:17:44.852 [2024-07-26 16:23:04.496122] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31204: invalid cntlid range [1-65520] 00:17:44.852 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:44.852 { 00:17:44.852 "nqn": "nqn.2016-06.io.spdk:cnode31204", 00:17:44.852 "max_cntlid": 65520, 00:17:44.852 "method": "nvmf_create_subsystem", 00:17:44.852 "req_id": 1 00:17:44.852 } 00:17:44.852 Got JSON-RPC error response 00:17:44.852 response: 00:17:44.852 { 00:17:44.852 "code": -32602, 00:17:44.852 "message": "Invalid cntlid range [1-65520]" 00:17:44.852 }' 00:17:44.852 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:17:44.852 { 00:17:44.852 "nqn": "nqn.2016-06.io.spdk:cnode31204", 00:17:44.852 "max_cntlid": 65520, 00:17:44.852 "method": "nvmf_create_subsystem", 00:17:44.852 "req_id": 1 00:17:44.852 } 00:17:44.852 Got JSON-RPC error response 00:17:44.852 response: 00:17:44.852 { 00:17:44.852 "code": -32602, 00:17:44.852 "message": "Invalid cntlid range [1-65520]" 00:17:44.852 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:44.852 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode120 -i 6 -I 5 00:17:45.111 [2024-07-26 16:23:04.740960] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode120: invalid cntlid range [6-5] 00:17:45.111 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:45.111 { 00:17:45.111 "nqn": "nqn.2016-06.io.spdk:cnode120", 00:17:45.111 "min_cntlid": 6, 00:17:45.111 "max_cntlid": 5, 00:17:45.111 "method": "nvmf_create_subsystem", 00:17:45.111 "req_id": 1 00:17:45.111 } 00:17:45.111 Got JSON-RPC error response 00:17:45.111 response: 00:17:45.111 { 00:17:45.111 "code": -32602, 00:17:45.111 "message": "Invalid cntlid range [6-5]" 00:17:45.111 }' 00:17:45.111 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 
00:17:45.111 { 00:17:45.111 "nqn": "nqn.2016-06.io.spdk:cnode120", 00:17:45.111 "min_cntlid": 6, 00:17:45.111 "max_cntlid": 5, 00:17:45.111 "method": "nvmf_create_subsystem", 00:17:45.111 "req_id": 1 00:17:45.111 } 00:17:45.111 Got JSON-RPC error response 00:17:45.111 response: 00:17:45.111 { 00:17:45.111 "code": -32602, 00:17:45.111 "message": "Invalid cntlid range [6-5]" 00:17:45.111 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:45.111 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:45.371 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:45.371 { 00:17:45.371 "name": "foobar", 00:17:45.371 "method": "nvmf_delete_target", 00:17:45.371 "req_id": 1 00:17:45.371 } 00:17:45.371 Got JSON-RPC error response 00:17:45.371 response: 00:17:45.371 { 00:17:45.371 "code": -32602, 00:17:45.371 "message": "The specified target doesn'\''t exist, cannot delete it." 00:17:45.371 }' 00:17:45.371 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:45.371 { 00:17:45.371 "name": "foobar", 00:17:45.371 "method": "nvmf_delete_target", 00:17:45.371 "req_id": 1 00:17:45.371 } 00:17:45.371 Got JSON-RPC error response 00:17:45.371 response: 00:17:45.371 { 00:17:45.371 "code": -32602, 00:17:45.371 "message": "The specified target doesn't exist, cannot delete it." 00:17:45.371 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:45.371 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:45.371 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:45.371 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:45.371 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:17:45.371 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:45.371 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:17:45.371 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:45.371 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:45.371 rmmod nvme_tcp 00:17:45.371 rmmod nvme_fabrics 00:17:45.371 rmmod nvme_keyring 00:17:45.371 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:45.371 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:17:45.371 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:17:45.371 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 641426 ']' 00:17:45.371 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 641426 00:17:45.371 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 641426 ']' 00:17:45.371 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 641426 00:17:45.371 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:17:45.371 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = 
Linux ']' 00:17:45.371 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 641426 00:17:45.371 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:45.371 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:45.371 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 641426' 00:17:45.371 killing process with pid 641426 00:17:45.371 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 641426 00:17:45.371 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 641426 00:17:46.745 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:46.745 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:46.745 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:46.745 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:46.745 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:46.745 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.745 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:46.745 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:48.651 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:48.651 00:17:48.651 real 0m10.267s 00:17:48.651 user 0m25.020s 00:17:48.651 sys 0m2.581s 00:17:48.651 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:48.651 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:48.651 ************************************ 00:17:48.651 END TEST nvmf_invalid 00:17:48.651 ************************************ 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:48.652 ************************************ 00:17:48.652 START TEST nvmf_connect_stress 00:17:48.652 ************************************ 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:48.652 * Looking for test storage... 
00:17:48.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:17:48.652 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:17:50.557 16:23:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:50.557 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:50.557 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:50.557 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.557 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:50.557 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:50.558 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:50.558 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:50.558 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:17:50.558 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:17:50.558 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:50.558 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:50.558 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:50.558 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:50.558 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:50.558 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:50.558 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:50.558 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:50.558 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:50.558 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:50.558 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:50.558 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:50.558 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:50.558 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:50.558 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:50.817 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:50.817 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:50.817 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:50.817 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:50.817 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:50.817 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:50.817 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:50.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:50.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:17:50.817 00:17:50.817 --- 10.0.0.2 ping statistics --- 00:17:50.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.817 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:17:50.817 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:50.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:50.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:17:50.817 00:17:50.817 --- 10.0.0.1 ping statistics --- 00:17:50.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.817 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:17:50.817 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:50.817 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:17:50.817 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:50.817 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:50.817 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:50.817 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:50.817 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:50.817 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:50.817 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:50.817 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:50.817 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:50.817 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:50.817 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.817 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=644193 00:17:50.817 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:50.817 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 644193 00:17:50.817 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 644193 ']' 00:17:50.817 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.817 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:50.817 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.817 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:50.817 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.817 [2024-07-26 16:23:10.498736] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:17:50.817 [2024-07-26 16:23:10.498885] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:50.817 EAL: No free 2048 kB hugepages reported on node 1 00:17:51.077 [2024-07-26 16:23:10.630631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:51.337 [2024-07-26 16:23:10.876118] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.337 [2024-07-26 16:23:10.876231] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:51.337 [2024-07-26 16:23:10.876267] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:51.337 [2024-07-26 16:23:10.876299] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:51.337 [2024-07-26 16:23:10.876322] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:51.337 [2024-07-26 16:23:10.876488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:51.337 [2024-07-26 16:23:10.876563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.337 [2024-07-26 16:23:10.876572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.907 [2024-07-26 16:23:11.487617] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.907 [2024-07-26 16:23:11.520779] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.907 NULL1 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=644346 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 644346 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.907 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # 
set +x 00:17:51.907 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.166 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.166 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 644346 00:17:52.166 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.166 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.166 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.732 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.732 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 644346 00:17:52.732 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.732 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.732 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.992 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.992 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 644346 00:17:52.992 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.992 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.992 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:53.252 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.252 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 644346 00:17:53.252 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:53.252 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.252 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:53.510 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.510 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 644346 00:17:53.510 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:53.510 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.510 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:53.769 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.769 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 644346 00:17:53.769 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:53.769 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.769 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@10 -- # set +x 00:17:54.335 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.335 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 644346 00:17:54.335 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:54.335 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.335 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:54.595 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.595 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 644346 00:17:54.595 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:54.595 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.595 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:54.856 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.856 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 644346 00:17:54.856 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:54.856 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.856 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.115 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.115 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 644346 00:17:55.115 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:55.115 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.115 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.679 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.679 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 644346 00:17:55.679 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:55.679 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.679 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.938 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.938 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 644346 00:17:55.938 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:55.938 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.938 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:17:56.197 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.197 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 644346 00:17:56.197 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:56.197 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.197 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:56.456 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.457 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 644346 00:17:56.457 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:56.457 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.457 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:56.716 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.716 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 644346 00:17:56.716 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:56.716 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.716 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:57.331 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.331 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 644346 00:17:57.331 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:57.331 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.331 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:57.591 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.591 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 644346 00:17:57.591 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:57.591 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.591 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:57.851 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.851 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 644346 00:17:57.851 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:57.852 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.852 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:17:58.111 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.111 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 644346 00:17:58.111 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:58.111 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.111 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:58.371 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.371 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 644346 00:17:58.371 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:58.371 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.371 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:58.629 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.629 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 644346 00:17:58.629 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:58.629 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.629 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:59.197 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.197 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 644346 00:17:59.197 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:59.197 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.197 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:59.457 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.457 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 644346 00:17:59.457 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:59.457 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.457 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:59.717 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.717 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 644346 00:17:59.717 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:59.717 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.717 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:17:59.976 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.976 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 644346 00:17:59.976 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:59.976 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.976 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:00.542 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.542 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 644346 00:18:00.542 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:00.542 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.542 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:00.801 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.801 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 644346 00:18:00.801 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:00.801 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.801 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:01.061 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.061 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 644346 00:18:01.061 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:01.061 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.061 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:01.321 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.321 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 644346 00:18:01.321 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:01.321 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.321 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:01.580 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.580 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 644346 00:18:01.580 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:01.580 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.580 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:18:02.146 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.146 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 644346 00:18:02.146 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:02.146 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.146 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:02.146 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:02.404 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.404 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 644346 00:18:02.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (644346) - No such process 00:18:02.404 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 644346 00:18:02.404 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:02.404 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:02.404 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:18:02.404 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:02.404 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:18:02.404 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:02.404 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:18:02.404 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:02.404 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:02.404 rmmod nvme_tcp 00:18:02.404 rmmod nvme_fabrics 00:18:02.404 rmmod nvme_keyring 00:18:02.404 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:02.404 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:18:02.404 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:18:02.404 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 644193 ']' 00:18:02.404 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 644193 00:18:02.404 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 644193 ']' 00:18:02.404 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 644193 00:18:02.404 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:18:02.404 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:02.404 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 644193 00:18:02.404 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:02.404 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:02.404 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 644193' 00:18:02.404 killing process with pid 644193 00:18:02.404 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 644193 00:18:02.404 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 644193 00:18:03.789 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:03.789 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:03.789 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:03.789 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:03.789 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:03.789 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.789 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:03.789 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.696 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:05.696 00:18:05.696 real 0m17.052s 00:18:05.696 user 0m42.669s 00:18:05.696 sys 0m5.706s 00:18:05.696 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:05.696 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:05.696 ************************************ 00:18:05.696 END TEST nvmf_connect_stress 00:18:05.696 ************************************ 00:18:05.696 16:23:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:05.696 16:23:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:05.696 16:23:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:05.696 16:23:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:05.696 ************************************ 00:18:05.696 START TEST nvmf_fused_ordering 00:18:05.696 ************************************ 00:18:05.696 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:05.696 * Looking for test storage... 
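The connect_stress run that just finished boils down to a handful of RPCs against the freshly started target plus a liveness loop around the stress binary. A condensed sketch follows, using only commands that appear in the trace; the assumption that rpc_cmd maps to scripts/rpc.py against /var/tmp/spdk.sock, and the socket-polling loop standing in for waitforlisten, are simplifications rather than the script's actual helpers:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"      # assumption: rpc_cmd is effectively rpc.py

  # start the target inside the test namespace (0xE = three reactors, as logged above)
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
  until [ -S /var/tmp/spdk.sock ]; do sleep 1; done     # crude stand-in for waitforlisten

  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512

  # run the connect stress binary against the listener for 10 seconds (-t 10)
  "$SPDK/test/nvme/connect_stress/connect_stress" -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
  PERF_PID=$!

  # the long run of 'kill -0 644346' checks above is this loop: keep issuing RPCs
  # to the target for as long as the stress process is still alive
  while kill -0 "$PERF_PID" 2>/dev/null; do
      sleep 1       # the real script pushes batched RPCs from rpc.txt here instead of sleeping
  done
  wait "$PERF_PID"

Once kill -0 reports "No such process", the script removes rpc.txt, tears down the trap, and runs nvmftestfini (the rmmod and killprocess output above), which is why the log then proceeds into the fused_ordering test.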
00:18:05.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:05.696 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:05.696 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:18:05.696 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:05.696 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:05.696 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:05.696 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:05.696 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:05.696 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:05.696 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:05.696 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:05.696 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:05.696 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:05.696 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:05.696 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:05.696 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:05.696 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:05.696 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:05.696 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:05.696 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:05.696 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:05.696 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:05.696 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:05.696 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.696 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.696 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.696 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:18:05.696 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.696 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:18:05.696 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:05.697 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:05.697 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:05.697 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:05.697 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:05.697 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:18:05.697 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:05.954 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:05.954 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:18:05.954 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:05.954 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:05.954 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:05.954 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:05.954 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:05.954 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.954 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:05.954 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.954 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:05.954 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:05.954 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:18:05.954 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:07.856 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:18:07.857 16:23:27 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:07.857 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:07.857 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
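The device-discovery pass running here (and continuing just below) is essentially a sysfs walk: for each PCI function with a supported vendor/device ID, list the kernel net devices registered under that PCI address. A rough equivalent, assuming pciutils is installed and using the E810 ID 0x8086:0x159b seen in the trace; the harness itself builds its list from a cached PCI scan rather than calling lspci, so this is only an illustration:

  # map each Intel E810 (0x8086:0x159b) port to its kernel net device via sysfs
  for pci in $(lspci -D -n -d 8086:159b | awk '{print $1}'); do
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$dev" ] || continue
          echo "Found net devices under $pci: $(basename "$dev")"
      done
  done

That mapping is what produces the "Found net devices under 0000:0a:00.x: cvl_0_x" lines below, after which the same namespace bring-up as before is repeated for this test.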
00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:07.857 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:07.857 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:07.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:07.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:18:07.857 00:18:07.857 --- 10.0.0.2 ping statistics --- 00:18:07.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.857 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:18:07.857 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:07.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:07.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:18:07.857 00:18:07.857 --- 10.0.0.1 ping statistics --- 00:18:07.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.857 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:18:07.858 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:07.858 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:18:07.858 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:07.858 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:07.858 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:07.858 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:07.858 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:07.858 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:07.858 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:07.858 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:18:07.858 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:07.858 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:07.858 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:07.858 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=647616 00:18:07.858 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 647616 00:18:07.858 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:07.858 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 647616 ']' 00:18:07.858 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.858 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:07.858 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.858 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:07.858 16:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:08.117 [2024-07-26 16:23:27.640456] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:18:08.117 [2024-07-26 16:23:27.640584] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:08.117 EAL: No free 2048 kB hugepages reported on node 1 00:18:08.117 [2024-07-26 16:23:27.781160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.377 [2024-07-26 16:23:28.044998] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:08.377 [2024-07-26 16:23:28.045090] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:08.377 [2024-07-26 16:23:28.045120] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:08.377 [2024-07-26 16:23:28.045145] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:08.377 [2024-07-26 16:23:28.045168] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:08.377 [2024-07-26 16:23:28.045223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:08.942 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:08.942 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:18:08.942 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:08.942 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:08.942 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:08.942 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:08.942 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:08.942 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.942 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:08.942 [2024-07-26 16:23:28.578303] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:08.942 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.942 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:08.942 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.942 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:08.942 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.942 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:08.942 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.942 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@10 -- # set +x 00:18:08.942 [2024-07-26 16:23:28.594531] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:08.942 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.942 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:08.942 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.942 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:08.942 NULL1 00:18:08.942 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.942 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:18:08.942 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.942 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:08.942 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.942 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:08.942 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.942 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:08.943 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.943 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:08.943 [2024-07-26 16:23:28.667458] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:18:08.943 [2024-07-26 16:23:28.667570] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid647769 ] 00:18:09.202 EAL: No free 2048 kB hugepages reported on node 1 00:18:09.770 Attached to nqn.2016-06.io.spdk:cnode1 00:18:09.770 Namespace ID: 1 size: 1GB 00:18:09.770 fused_ordering(0) 00:18:09.770 fused_ordering(1) 00:18:09.770 fused_ordering(2) 00:18:09.770 fused_ordering(3) 00:18:09.770 fused_ordering(4) 00:18:09.770 fused_ordering(5) 00:18:09.770 fused_ordering(6) 00:18:09.770 fused_ordering(7) 00:18:09.770 fused_ordering(8) 00:18:09.770 fused_ordering(9) 00:18:09.770 fused_ordering(10) 00:18:09.770 fused_ordering(11) 00:18:09.770 fused_ordering(12) 00:18:09.770 fused_ordering(13) 00:18:09.770 fused_ordering(14) 00:18:09.770 fused_ordering(15) 00:18:09.770 fused_ordering(16) 00:18:09.770 fused_ordering(17) 00:18:09.770 fused_ordering(18) 00:18:09.770 fused_ordering(19) 00:18:09.770 fused_ordering(20) 00:18:09.770 fused_ordering(21) 00:18:09.770 fused_ordering(22) 00:18:09.770 fused_ordering(23) 00:18:09.770 fused_ordering(24) 00:18:09.770 fused_ordering(25) 00:18:09.770 fused_ordering(26) 00:18:09.770 fused_ordering(27) 00:18:09.770 fused_ordering(28) 00:18:09.770 fused_ordering(29) 00:18:09.770 fused_ordering(30) 00:18:09.770 fused_ordering(31) 00:18:09.770 fused_ordering(32) 00:18:09.770 fused_ordering(33) 00:18:09.770 fused_ordering(34) 00:18:09.770 fused_ordering(35) 00:18:09.770 fused_ordering(36) 00:18:09.770 fused_ordering(37) 00:18:09.770 fused_ordering(38) 00:18:09.770 fused_ordering(39) 00:18:09.770 fused_ordering(40) 00:18:09.770 fused_ordering(41) 00:18:09.770 fused_ordering(42) 00:18:09.770 fused_ordering(43) 00:18:09.770 fused_ordering(44) 00:18:09.770 fused_ordering(45) 00:18:09.770 fused_ordering(46) 00:18:09.770 fused_ordering(47) 00:18:09.770 fused_ordering(48) 00:18:09.770 fused_ordering(49) 00:18:09.770 fused_ordering(50) 00:18:09.770 fused_ordering(51) 00:18:09.770 fused_ordering(52) 00:18:09.770 fused_ordering(53) 00:18:09.770 fused_ordering(54) 00:18:09.770 fused_ordering(55) 00:18:09.770 fused_ordering(56) 00:18:09.770 fused_ordering(57) 00:18:09.770 fused_ordering(58) 00:18:09.770 fused_ordering(59) 00:18:09.770 fused_ordering(60) 00:18:09.770 fused_ordering(61) 00:18:09.770 fused_ordering(62) 00:18:09.770 fused_ordering(63) 00:18:09.770 fused_ordering(64) 00:18:09.770 fused_ordering(65) 00:18:09.770 fused_ordering(66) 00:18:09.770 fused_ordering(67) 00:18:09.770 fused_ordering(68) 00:18:09.770 fused_ordering(69) 00:18:09.770 fused_ordering(70) 00:18:09.770 fused_ordering(71) 00:18:09.770 fused_ordering(72) 00:18:09.770 fused_ordering(73) 00:18:09.770 fused_ordering(74) 00:18:09.770 fused_ordering(75) 00:18:09.770 fused_ordering(76) 00:18:09.770 fused_ordering(77) 00:18:09.770 fused_ordering(78) 00:18:09.770 fused_ordering(79) 00:18:09.770 fused_ordering(80) 00:18:09.770 fused_ordering(81) 00:18:09.770 fused_ordering(82) 00:18:09.770 fused_ordering(83) 00:18:09.770 fused_ordering(84) 00:18:09.770 fused_ordering(85) 00:18:09.770 fused_ordering(86) 00:18:09.770 fused_ordering(87) 00:18:09.770 fused_ordering(88) 00:18:09.770 fused_ordering(89) 00:18:09.770 fused_ordering(90) 00:18:09.770 fused_ordering(91) 00:18:09.770 fused_ordering(92) 00:18:09.770 fused_ordering(93) 00:18:09.770 fused_ordering(94) 00:18:09.770 fused_ordering(95) 00:18:09.770 fused_ordering(96) 
00:18:09.770 fused_ordering(97) 00:18:09.770 fused_ordering(98) 00:18:09.770 fused_ordering(99) 00:18:09.770 fused_ordering(100) 00:18:09.770 fused_ordering(101) 00:18:09.770 fused_ordering(102) 00:18:09.770 fused_ordering(103) 00:18:09.770 fused_ordering(104) 00:18:09.770 fused_ordering(105) 00:18:09.770 fused_ordering(106) 00:18:09.770 fused_ordering(107) 00:18:09.770 fused_ordering(108) 00:18:09.770 fused_ordering(109) 00:18:09.770 fused_ordering(110) 00:18:09.770 fused_ordering(111) 00:18:09.770 fused_ordering(112) 00:18:09.770 fused_ordering(113) 00:18:09.770 fused_ordering(114) 00:18:09.770 fused_ordering(115) 00:18:09.770 fused_ordering(116) 00:18:09.770 fused_ordering(117) 00:18:09.770 fused_ordering(118) 00:18:09.770 fused_ordering(119) 00:18:09.770 fused_ordering(120) 00:18:09.770 fused_ordering(121) 00:18:09.770 fused_ordering(122) 00:18:09.770 fused_ordering(123) 00:18:09.770 fused_ordering(124) 00:18:09.770 fused_ordering(125) 00:18:09.770 fused_ordering(126) 00:18:09.770 fused_ordering(127) 00:18:09.770 fused_ordering(128) 00:18:09.770 fused_ordering(129) 00:18:09.770 fused_ordering(130) 00:18:09.770 fused_ordering(131) 00:18:09.770 fused_ordering(132) 00:18:09.770 fused_ordering(133) 00:18:09.770 fused_ordering(134) 00:18:09.770 fused_ordering(135) 00:18:09.770 fused_ordering(136) 00:18:09.770 fused_ordering(137) 00:18:09.771 fused_ordering(138) 00:18:09.771 fused_ordering(139) 00:18:09.771 fused_ordering(140) 00:18:09.771 fused_ordering(141) 00:18:09.771 fused_ordering(142) 00:18:09.771 fused_ordering(143) 00:18:09.771 fused_ordering(144) 00:18:09.771 fused_ordering(145) 00:18:09.771 fused_ordering(146) 00:18:09.771 fused_ordering(147) 00:18:09.771 fused_ordering(148) 00:18:09.771 fused_ordering(149) 00:18:09.771 fused_ordering(150) 00:18:09.771 fused_ordering(151) 00:18:09.771 fused_ordering(152) 00:18:09.771 fused_ordering(153) 00:18:09.771 fused_ordering(154) 00:18:09.771 fused_ordering(155) 00:18:09.771 fused_ordering(156) 00:18:09.771 fused_ordering(157) 00:18:09.771 fused_ordering(158) 00:18:09.771 fused_ordering(159) 00:18:09.771 fused_ordering(160) 00:18:09.771 fused_ordering(161) 00:18:09.771 fused_ordering(162) 00:18:09.771 fused_ordering(163) 00:18:09.771 fused_ordering(164) 00:18:09.771 fused_ordering(165) 00:18:09.771 fused_ordering(166) 00:18:09.771 fused_ordering(167) 00:18:09.771 fused_ordering(168) 00:18:09.771 fused_ordering(169) 00:18:09.771 fused_ordering(170) 00:18:09.771 fused_ordering(171) 00:18:09.771 fused_ordering(172) 00:18:09.771 fused_ordering(173) 00:18:09.771 fused_ordering(174) 00:18:09.771 fused_ordering(175) 00:18:09.771 fused_ordering(176) 00:18:09.771 fused_ordering(177) 00:18:09.771 fused_ordering(178) 00:18:09.771 fused_ordering(179) 00:18:09.771 fused_ordering(180) 00:18:09.771 fused_ordering(181) 00:18:09.771 fused_ordering(182) 00:18:09.771 fused_ordering(183) 00:18:09.771 fused_ordering(184) 00:18:09.771 fused_ordering(185) 00:18:09.771 fused_ordering(186) 00:18:09.771 fused_ordering(187) 00:18:09.771 fused_ordering(188) 00:18:09.771 fused_ordering(189) 00:18:09.771 fused_ordering(190) 00:18:09.771 fused_ordering(191) 00:18:09.771 fused_ordering(192) 00:18:09.771 fused_ordering(193) 00:18:09.771 fused_ordering(194) 00:18:09.771 fused_ordering(195) 00:18:09.771 fused_ordering(196) 00:18:09.771 fused_ordering(197) 00:18:09.771 fused_ordering(198) 00:18:09.771 fused_ordering(199) 00:18:09.771 fused_ordering(200) 00:18:09.771 fused_ordering(201) 00:18:09.771 fused_ordering(202) 00:18:09.771 fused_ordering(203) 00:18:09.771 
fused_ordering(204) 00:18:09.771 fused_ordering(205) 00:18:10.339 fused_ordering(206) 00:18:10.339 fused_ordering(207) 00:18:10.339 fused_ordering(208) 00:18:10.339 fused_ordering(209) 00:18:10.339 fused_ordering(210) 00:18:10.339 fused_ordering(211) 00:18:10.339 fused_ordering(212) 00:18:10.339 fused_ordering(213) 00:18:10.339 fused_ordering(214) 00:18:10.339 fused_ordering(215) 00:18:10.339 fused_ordering(216) 00:18:10.339 fused_ordering(217) 00:18:10.339 fused_ordering(218) 00:18:10.339 fused_ordering(219) 00:18:10.339 fused_ordering(220) 00:18:10.339 fused_ordering(221) 00:18:10.339 fused_ordering(222) 00:18:10.339 fused_ordering(223) 00:18:10.339 fused_ordering(224) 00:18:10.339 fused_ordering(225) 00:18:10.339 fused_ordering(226) 00:18:10.339 fused_ordering(227) 00:18:10.339 fused_ordering(228) 00:18:10.339 fused_ordering(229) 00:18:10.339 fused_ordering(230) 00:18:10.339 fused_ordering(231) 00:18:10.339 fused_ordering(232) 00:18:10.339 fused_ordering(233) 00:18:10.339 fused_ordering(234) 00:18:10.339 fused_ordering(235) 00:18:10.339 fused_ordering(236) 00:18:10.339 fused_ordering(237) 00:18:10.339 fused_ordering(238) 00:18:10.339 fused_ordering(239) 00:18:10.339 fused_ordering(240) 00:18:10.339 fused_ordering(241) 00:18:10.339 fused_ordering(242) 00:18:10.339 fused_ordering(243) 00:18:10.339 fused_ordering(244) 00:18:10.339 fused_ordering(245) 00:18:10.339 fused_ordering(246) 00:18:10.339 fused_ordering(247) 00:18:10.339 fused_ordering(248) 00:18:10.339 fused_ordering(249) 00:18:10.339 fused_ordering(250) 00:18:10.339 fused_ordering(251) 00:18:10.339 fused_ordering(252) 00:18:10.339 fused_ordering(253) 00:18:10.339 fused_ordering(254) 00:18:10.339 fused_ordering(255) 00:18:10.339 fused_ordering(256) 00:18:10.339 fused_ordering(257) 00:18:10.339 fused_ordering(258) 00:18:10.339 fused_ordering(259) 00:18:10.339 fused_ordering(260) 00:18:10.339 fused_ordering(261) 00:18:10.339 fused_ordering(262) 00:18:10.339 fused_ordering(263) 00:18:10.339 fused_ordering(264) 00:18:10.339 fused_ordering(265) 00:18:10.339 fused_ordering(266) 00:18:10.339 fused_ordering(267) 00:18:10.339 fused_ordering(268) 00:18:10.339 fused_ordering(269) 00:18:10.339 fused_ordering(270) 00:18:10.339 fused_ordering(271) 00:18:10.339 fused_ordering(272) 00:18:10.339 fused_ordering(273) 00:18:10.339 fused_ordering(274) 00:18:10.339 fused_ordering(275) 00:18:10.339 fused_ordering(276) 00:18:10.339 fused_ordering(277) 00:18:10.339 fused_ordering(278) 00:18:10.339 fused_ordering(279) 00:18:10.339 fused_ordering(280) 00:18:10.339 fused_ordering(281) 00:18:10.339 fused_ordering(282) 00:18:10.339 fused_ordering(283) 00:18:10.339 fused_ordering(284) 00:18:10.339 fused_ordering(285) 00:18:10.339 fused_ordering(286) 00:18:10.339 fused_ordering(287) 00:18:10.339 fused_ordering(288) 00:18:10.339 fused_ordering(289) 00:18:10.339 fused_ordering(290) 00:18:10.339 fused_ordering(291) 00:18:10.339 fused_ordering(292) 00:18:10.339 fused_ordering(293) 00:18:10.339 fused_ordering(294) 00:18:10.339 fused_ordering(295) 00:18:10.339 fused_ordering(296) 00:18:10.339 fused_ordering(297) 00:18:10.339 fused_ordering(298) 00:18:10.339 fused_ordering(299) 00:18:10.339 fused_ordering(300) 00:18:10.339 fused_ordering(301) 00:18:10.339 fused_ordering(302) 00:18:10.339 fused_ordering(303) 00:18:10.339 fused_ordering(304) 00:18:10.339 fused_ordering(305) 00:18:10.339 fused_ordering(306) 00:18:10.339 fused_ordering(307) 00:18:10.339 fused_ordering(308) 00:18:10.339 fused_ordering(309) 00:18:10.339 fused_ordering(310) 00:18:10.339 fused_ordering(311) 
00:18:10.339 fused_ordering(312) 00:18:10.339 fused_ordering(313) 00:18:10.339 fused_ordering(314) 00:18:10.339 fused_ordering(315) 00:18:10.339 fused_ordering(316) 00:18:10.339 fused_ordering(317) 00:18:10.339 fused_ordering(318) 00:18:10.339 fused_ordering(319) 00:18:10.339 fused_ordering(320) 00:18:10.339 fused_ordering(321) 00:18:10.339 fused_ordering(322) 00:18:10.339 fused_ordering(323) 00:18:10.339 fused_ordering(324) 00:18:10.339 fused_ordering(325) 00:18:10.339 fused_ordering(326) 00:18:10.339 fused_ordering(327) 00:18:10.339 fused_ordering(328) 00:18:10.339 fused_ordering(329) 00:18:10.339 fused_ordering(330) 00:18:10.339 fused_ordering(331) 00:18:10.339 fused_ordering(332) 00:18:10.339 fused_ordering(333) 00:18:10.339 fused_ordering(334) 00:18:10.339 fused_ordering(335) 00:18:10.339 fused_ordering(336) 00:18:10.339 fused_ordering(337) 00:18:10.339 fused_ordering(338) 00:18:10.339 fused_ordering(339) 00:18:10.339 fused_ordering(340) 00:18:10.339 fused_ordering(341) 00:18:10.339 fused_ordering(342) 00:18:10.339 fused_ordering(343) 00:18:10.339 fused_ordering(344) 00:18:10.339 fused_ordering(345) 00:18:10.339 fused_ordering(346) 00:18:10.339 fused_ordering(347) 00:18:10.339 fused_ordering(348) 00:18:10.339 fused_ordering(349) 00:18:10.339 fused_ordering(350) 00:18:10.340 fused_ordering(351) 00:18:10.340 fused_ordering(352) 00:18:10.340 fused_ordering(353) 00:18:10.340 fused_ordering(354) 00:18:10.340 fused_ordering(355) 00:18:10.340 fused_ordering(356) 00:18:10.340 fused_ordering(357) 00:18:10.340 fused_ordering(358) 00:18:10.340 fused_ordering(359) 00:18:10.340 fused_ordering(360) 00:18:10.340 fused_ordering(361) 00:18:10.340 fused_ordering(362) 00:18:10.340 fused_ordering(363) 00:18:10.340 fused_ordering(364) 00:18:10.340 fused_ordering(365) 00:18:10.340 fused_ordering(366) 00:18:10.340 fused_ordering(367) 00:18:10.340 fused_ordering(368) 00:18:10.340 fused_ordering(369) 00:18:10.340 fused_ordering(370) 00:18:10.340 fused_ordering(371) 00:18:10.340 fused_ordering(372) 00:18:10.340 fused_ordering(373) 00:18:10.340 fused_ordering(374) 00:18:10.340 fused_ordering(375) 00:18:10.340 fused_ordering(376) 00:18:10.340 fused_ordering(377) 00:18:10.340 fused_ordering(378) 00:18:10.340 fused_ordering(379) 00:18:10.340 fused_ordering(380) 00:18:10.340 fused_ordering(381) 00:18:10.340 fused_ordering(382) 00:18:10.340 fused_ordering(383) 00:18:10.340 fused_ordering(384) 00:18:10.340 fused_ordering(385) 00:18:10.340 fused_ordering(386) 00:18:10.340 fused_ordering(387) 00:18:10.340 fused_ordering(388) 00:18:10.340 fused_ordering(389) 00:18:10.340 fused_ordering(390) 00:18:10.340 fused_ordering(391) 00:18:10.340 fused_ordering(392) 00:18:10.340 fused_ordering(393) 00:18:10.340 fused_ordering(394) 00:18:10.340 fused_ordering(395) 00:18:10.340 fused_ordering(396) 00:18:10.340 fused_ordering(397) 00:18:10.340 fused_ordering(398) 00:18:10.340 fused_ordering(399) 00:18:10.340 fused_ordering(400) 00:18:10.340 fused_ordering(401) 00:18:10.340 fused_ordering(402) 00:18:10.340 fused_ordering(403) 00:18:10.340 fused_ordering(404) 00:18:10.340 fused_ordering(405) 00:18:10.340 fused_ordering(406) 00:18:10.340 fused_ordering(407) 00:18:10.340 fused_ordering(408) 00:18:10.340 fused_ordering(409) 00:18:10.340 fused_ordering(410) 00:18:11.280 fused_ordering(411) 00:18:11.280 fused_ordering(412) 00:18:11.280 fused_ordering(413) 00:18:11.280 fused_ordering(414) 00:18:11.280 fused_ordering(415) 00:18:11.280 fused_ordering(416) 00:18:11.280 fused_ordering(417) 00:18:11.280 fused_ordering(418) 00:18:11.280 
fused_ordering(419) 00:18:11.280 fused_ordering(420) 00:18:11.280 fused_ordering(421) 00:18:11.280 fused_ordering(422) 00:18:11.280 fused_ordering(423) 00:18:11.280 fused_ordering(424) 00:18:11.280 fused_ordering(425) 00:18:11.280 fused_ordering(426) 00:18:11.280 fused_ordering(427) 00:18:11.280 fused_ordering(428) 00:18:11.280 fused_ordering(429) 00:18:11.280 fused_ordering(430) 00:18:11.280 fused_ordering(431) 00:18:11.280 fused_ordering(432) 00:18:11.280 fused_ordering(433) 00:18:11.280 fused_ordering(434) 00:18:11.280 fused_ordering(435) 00:18:11.280 fused_ordering(436) 00:18:11.280 fused_ordering(437) 00:18:11.280 fused_ordering(438) 00:18:11.280 fused_ordering(439) 00:18:11.280 fused_ordering(440) 00:18:11.280 fused_ordering(441) 00:18:11.280 fused_ordering(442) 00:18:11.280 fused_ordering(443) 00:18:11.280 fused_ordering(444) 00:18:11.280 fused_ordering(445) 00:18:11.280 fused_ordering(446) 00:18:11.280 fused_ordering(447) 00:18:11.280 fused_ordering(448) 00:18:11.280 fused_ordering(449) 00:18:11.280 fused_ordering(450) 00:18:11.280 fused_ordering(451) 00:18:11.280 fused_ordering(452) 00:18:11.280 fused_ordering(453) 00:18:11.280 fused_ordering(454) 00:18:11.280 fused_ordering(455) 00:18:11.280 fused_ordering(456) 00:18:11.280 fused_ordering(457) 00:18:11.280 fused_ordering(458) 00:18:11.280 fused_ordering(459) 00:18:11.280 fused_ordering(460) 00:18:11.280 fused_ordering(461) 00:18:11.280 fused_ordering(462) 00:18:11.280 fused_ordering(463) 00:18:11.280 fused_ordering(464) 00:18:11.280 fused_ordering(465) 00:18:11.280 fused_ordering(466) 00:18:11.280 fused_ordering(467) 00:18:11.280 fused_ordering(468) 00:18:11.280 fused_ordering(469) 00:18:11.280 fused_ordering(470) 00:18:11.280 fused_ordering(471) 00:18:11.280 fused_ordering(472) 00:18:11.280 fused_ordering(473) 00:18:11.280 fused_ordering(474) 00:18:11.280 fused_ordering(475) 00:18:11.280 fused_ordering(476) 00:18:11.280 fused_ordering(477) 00:18:11.280 fused_ordering(478) 00:18:11.280 fused_ordering(479) 00:18:11.280 fused_ordering(480) 00:18:11.280 fused_ordering(481) 00:18:11.280 fused_ordering(482) 00:18:11.280 fused_ordering(483) 00:18:11.280 fused_ordering(484) 00:18:11.280 fused_ordering(485) 00:18:11.280 fused_ordering(486) 00:18:11.280 fused_ordering(487) 00:18:11.280 fused_ordering(488) 00:18:11.280 fused_ordering(489) 00:18:11.280 fused_ordering(490) 00:18:11.280 fused_ordering(491) 00:18:11.280 fused_ordering(492) 00:18:11.280 fused_ordering(493) 00:18:11.280 fused_ordering(494) 00:18:11.280 fused_ordering(495) 00:18:11.280 fused_ordering(496) 00:18:11.280 fused_ordering(497) 00:18:11.280 fused_ordering(498) 00:18:11.280 fused_ordering(499) 00:18:11.280 fused_ordering(500) 00:18:11.280 fused_ordering(501) 00:18:11.280 fused_ordering(502) 00:18:11.280 fused_ordering(503) 00:18:11.280 fused_ordering(504) 00:18:11.280 fused_ordering(505) 00:18:11.280 fused_ordering(506) 00:18:11.280 fused_ordering(507) 00:18:11.280 fused_ordering(508) 00:18:11.280 fused_ordering(509) 00:18:11.280 fused_ordering(510) 00:18:11.280 fused_ordering(511) 00:18:11.280 fused_ordering(512) 00:18:11.280 fused_ordering(513) 00:18:11.280 fused_ordering(514) 00:18:11.280 fused_ordering(515) 00:18:11.280 fused_ordering(516) 00:18:11.280 fused_ordering(517) 00:18:11.280 fused_ordering(518) 00:18:11.280 fused_ordering(519) 00:18:11.280 fused_ordering(520) 00:18:11.280 fused_ordering(521) 00:18:11.280 fused_ordering(522) 00:18:11.280 fused_ordering(523) 00:18:11.280 fused_ordering(524) 00:18:11.280 fused_ordering(525) 00:18:11.280 fused_ordering(526) 
00:18:11.280 fused_ordering(527) 00:18:11.280 fused_ordering(528) 00:18:11.280 fused_ordering(529) 00:18:11.280 fused_ordering(530) 00:18:11.280 fused_ordering(531) 00:18:11.280 fused_ordering(532) 00:18:11.280 fused_ordering(533) 00:18:11.280 fused_ordering(534) 00:18:11.280 fused_ordering(535) 00:18:11.280 fused_ordering(536) 00:18:11.280 fused_ordering(537) 00:18:11.280 fused_ordering(538) 00:18:11.280 fused_ordering(539) 00:18:11.280 fused_ordering(540) 00:18:11.280 fused_ordering(541) 00:18:11.280 fused_ordering(542) 00:18:11.280 fused_ordering(543) 00:18:11.280 fused_ordering(544) 00:18:11.280 fused_ordering(545) 00:18:11.280 fused_ordering(546) 00:18:11.280 fused_ordering(547) 00:18:11.280 fused_ordering(548) 00:18:11.280 fused_ordering(549) 00:18:11.280 fused_ordering(550) 00:18:11.280 fused_ordering(551) 00:18:11.280 fused_ordering(552) 00:18:11.280 fused_ordering(553) 00:18:11.280 fused_ordering(554) 00:18:11.280 fused_ordering(555) 00:18:11.280 fused_ordering(556) 00:18:11.280 fused_ordering(557) 00:18:11.280 fused_ordering(558) 00:18:11.280 fused_ordering(559) 00:18:11.280 fused_ordering(560) 00:18:11.280 fused_ordering(561) 00:18:11.280 fused_ordering(562) 00:18:11.280 fused_ordering(563) 00:18:11.280 fused_ordering(564) 00:18:11.280 fused_ordering(565) 00:18:11.280 fused_ordering(566) 00:18:11.280 fused_ordering(567) 00:18:11.280 fused_ordering(568) 00:18:11.280 fused_ordering(569) 00:18:11.280 fused_ordering(570) 00:18:11.280 fused_ordering(571) 00:18:11.280 fused_ordering(572) 00:18:11.280 fused_ordering(573) 00:18:11.280 fused_ordering(574) 00:18:11.280 fused_ordering(575) 00:18:11.280 fused_ordering(576) 00:18:11.280 fused_ordering(577) 00:18:11.280 fused_ordering(578) 00:18:11.280 fused_ordering(579) 00:18:11.280 fused_ordering(580) 00:18:11.280 fused_ordering(581) 00:18:11.280 fused_ordering(582) 00:18:11.280 fused_ordering(583) 00:18:11.280 fused_ordering(584) 00:18:11.280 fused_ordering(585) 00:18:11.280 fused_ordering(586) 00:18:11.280 fused_ordering(587) 00:18:11.280 fused_ordering(588) 00:18:11.280 fused_ordering(589) 00:18:11.280 fused_ordering(590) 00:18:11.280 fused_ordering(591) 00:18:11.280 fused_ordering(592) 00:18:11.280 fused_ordering(593) 00:18:11.280 fused_ordering(594) 00:18:11.280 fused_ordering(595) 00:18:11.280 fused_ordering(596) 00:18:11.280 fused_ordering(597) 00:18:11.280 fused_ordering(598) 00:18:11.280 fused_ordering(599) 00:18:11.280 fused_ordering(600) 00:18:11.280 fused_ordering(601) 00:18:11.280 fused_ordering(602) 00:18:11.280 fused_ordering(603) 00:18:11.280 fused_ordering(604) 00:18:11.280 fused_ordering(605) 00:18:11.280 fused_ordering(606) 00:18:11.280 fused_ordering(607) 00:18:11.280 fused_ordering(608) 00:18:11.280 fused_ordering(609) 00:18:11.280 fused_ordering(610) 00:18:11.280 fused_ordering(611) 00:18:11.280 fused_ordering(612) 00:18:11.280 fused_ordering(613) 00:18:11.280 fused_ordering(614) 00:18:11.280 fused_ordering(615) 00:18:11.848 fused_ordering(616) 00:18:11.848 fused_ordering(617) 00:18:11.848 fused_ordering(618) 00:18:11.848 fused_ordering(619) 00:18:11.848 fused_ordering(620) 00:18:11.848 fused_ordering(621) 00:18:11.848 fused_ordering(622) 00:18:11.848 fused_ordering(623) 00:18:11.848 fused_ordering(624) 00:18:11.848 fused_ordering(625) 00:18:11.848 fused_ordering(626) 00:18:11.848 fused_ordering(627) 00:18:11.848 fused_ordering(628) 00:18:11.848 fused_ordering(629) 00:18:11.848 fused_ordering(630) 00:18:11.848 fused_ordering(631) 00:18:11.848 fused_ordering(632) 00:18:11.848 fused_ordering(633) 00:18:11.848 
fused_ordering(634) 00:18:11.848 fused_ordering(635) 00:18:11.848 fused_ordering(636) 00:18:11.848 fused_ordering(637) 00:18:11.848 fused_ordering(638) 00:18:11.848 fused_ordering(639) 00:18:11.848 fused_ordering(640) 00:18:11.848 fused_ordering(641) 00:18:11.848 fused_ordering(642) 00:18:11.848 fused_ordering(643) 00:18:11.848 fused_ordering(644) 00:18:11.848 fused_ordering(645) 00:18:11.848 fused_ordering(646) 00:18:11.848 fused_ordering(647) 00:18:11.848 fused_ordering(648) 00:18:11.848 fused_ordering(649) 00:18:11.848 fused_ordering(650) 00:18:11.848 fused_ordering(651) 00:18:11.848 fused_ordering(652) 00:18:11.848 fused_ordering(653) 00:18:11.848 fused_ordering(654) 00:18:11.848 fused_ordering(655) 00:18:11.848 fused_ordering(656) 00:18:11.848 fused_ordering(657) 00:18:11.848 fused_ordering(658) 00:18:11.848 fused_ordering(659) 00:18:11.848 fused_ordering(660) 00:18:11.848 fused_ordering(661) 00:18:11.848 fused_ordering(662) 00:18:11.848 fused_ordering(663) 00:18:11.848 fused_ordering(664) 00:18:11.848 fused_ordering(665) 00:18:11.848 fused_ordering(666) 00:18:11.848 fused_ordering(667) 00:18:11.848 fused_ordering(668) 00:18:11.848 fused_ordering(669) 00:18:11.848 fused_ordering(670) 00:18:11.848 fused_ordering(671) 00:18:11.848 fused_ordering(672) 00:18:11.848 fused_ordering(673) 00:18:11.848 fused_ordering(674) 00:18:11.848 fused_ordering(675) 00:18:11.848 fused_ordering(676) 00:18:11.848 fused_ordering(677) 00:18:11.848 fused_ordering(678) 00:18:11.848 fused_ordering(679) 00:18:11.848 fused_ordering(680) 00:18:11.848 fused_ordering(681) 00:18:11.848 fused_ordering(682) 00:18:11.848 fused_ordering(683) 00:18:11.848 fused_ordering(684) 00:18:11.848 fused_ordering(685) 00:18:11.848 fused_ordering(686) 00:18:11.848 fused_ordering(687) 00:18:11.848 fused_ordering(688) 00:18:11.848 fused_ordering(689) 00:18:11.848 fused_ordering(690) 00:18:11.848 fused_ordering(691) 00:18:11.848 fused_ordering(692) 00:18:11.848 fused_ordering(693) 00:18:11.848 fused_ordering(694) 00:18:11.848 fused_ordering(695) 00:18:11.848 fused_ordering(696) 00:18:11.848 fused_ordering(697) 00:18:11.848 fused_ordering(698) 00:18:11.848 fused_ordering(699) 00:18:11.848 fused_ordering(700) 00:18:11.848 fused_ordering(701) 00:18:11.848 fused_ordering(702) 00:18:11.848 fused_ordering(703) 00:18:11.848 fused_ordering(704) 00:18:11.848 fused_ordering(705) 00:18:11.848 fused_ordering(706) 00:18:11.848 fused_ordering(707) 00:18:11.848 fused_ordering(708) 00:18:11.848 fused_ordering(709) 00:18:11.848 fused_ordering(710) 00:18:11.848 fused_ordering(711) 00:18:11.848 fused_ordering(712) 00:18:11.848 fused_ordering(713) 00:18:11.848 fused_ordering(714) 00:18:11.848 fused_ordering(715) 00:18:11.848 fused_ordering(716) 00:18:11.848 fused_ordering(717) 00:18:11.848 fused_ordering(718) 00:18:11.848 fused_ordering(719) 00:18:11.848 fused_ordering(720) 00:18:11.848 fused_ordering(721) 00:18:11.848 fused_ordering(722) 00:18:11.848 fused_ordering(723) 00:18:11.848 fused_ordering(724) 00:18:11.848 fused_ordering(725) 00:18:11.849 fused_ordering(726) 00:18:11.849 fused_ordering(727) 00:18:11.849 fused_ordering(728) 00:18:11.849 fused_ordering(729) 00:18:11.849 fused_ordering(730) 00:18:11.849 fused_ordering(731) 00:18:11.849 fused_ordering(732) 00:18:11.849 fused_ordering(733) 00:18:11.849 fused_ordering(734) 00:18:11.849 fused_ordering(735) 00:18:11.849 fused_ordering(736) 00:18:11.849 fused_ordering(737) 00:18:11.849 fused_ordering(738) 00:18:11.849 fused_ordering(739) 00:18:11.849 fused_ordering(740) 00:18:11.849 fused_ordering(741) 
00:18:11.849 fused_ordering(742) 00:18:11.849 fused_ordering(743) 00:18:11.849 fused_ordering(744) 00:18:11.849 fused_ordering(745) 00:18:11.849 fused_ordering(746) 00:18:11.849 fused_ordering(747) 00:18:11.849 fused_ordering(748) 00:18:11.849 fused_ordering(749) 00:18:11.849 fused_ordering(750) 00:18:11.849 fused_ordering(751) 00:18:11.849 fused_ordering(752) 00:18:11.849 fused_ordering(753) 00:18:11.849 fused_ordering(754) 00:18:11.849 fused_ordering(755) 00:18:11.849 fused_ordering(756) 00:18:11.849 fused_ordering(757) 00:18:11.849 fused_ordering(758) 00:18:11.849 fused_ordering(759) 00:18:11.849 fused_ordering(760) 00:18:11.849 fused_ordering(761) 00:18:11.849 fused_ordering(762) 00:18:11.849 fused_ordering(763) 00:18:11.849 fused_ordering(764) 00:18:11.849 fused_ordering(765) 00:18:11.849 fused_ordering(766) 00:18:11.849 fused_ordering(767) 00:18:11.849 fused_ordering(768) 00:18:11.849 fused_ordering(769) 00:18:11.849 fused_ordering(770) 00:18:11.849 fused_ordering(771) 00:18:11.849 fused_ordering(772) 00:18:11.849 fused_ordering(773) 00:18:11.849 fused_ordering(774) 00:18:11.849 fused_ordering(775) 00:18:11.849 fused_ordering(776) 00:18:11.849 fused_ordering(777) 00:18:11.849 fused_ordering(778) 00:18:11.849 fused_ordering(779) 00:18:11.849 fused_ordering(780) 00:18:11.849 fused_ordering(781) 00:18:11.849 fused_ordering(782) 00:18:11.849 fused_ordering(783) 00:18:11.849 fused_ordering(784) 00:18:11.849 fused_ordering(785) 00:18:11.849 fused_ordering(786) 00:18:11.849 fused_ordering(787) 00:18:11.849 fused_ordering(788) 00:18:11.849 fused_ordering(789) 00:18:11.849 fused_ordering(790) 00:18:11.849 fused_ordering(791) 00:18:11.849 fused_ordering(792) 00:18:11.849 fused_ordering(793) 00:18:11.849 fused_ordering(794) 00:18:11.849 fused_ordering(795) 00:18:11.849 fused_ordering(796) 00:18:11.849 fused_ordering(797) 00:18:11.849 fused_ordering(798) 00:18:11.849 fused_ordering(799) 00:18:11.849 fused_ordering(800) 00:18:11.849 fused_ordering(801) 00:18:11.849 fused_ordering(802) 00:18:11.849 fused_ordering(803) 00:18:11.849 fused_ordering(804) 00:18:11.849 fused_ordering(805) 00:18:11.849 fused_ordering(806) 00:18:11.849 fused_ordering(807) 00:18:11.849 fused_ordering(808) 00:18:11.849 fused_ordering(809) 00:18:11.849 fused_ordering(810) 00:18:11.849 fused_ordering(811) 00:18:11.849 fused_ordering(812) 00:18:11.849 fused_ordering(813) 00:18:11.849 fused_ordering(814) 00:18:11.849 fused_ordering(815) 00:18:11.849 fused_ordering(816) 00:18:11.849 fused_ordering(817) 00:18:11.849 fused_ordering(818) 00:18:11.849 fused_ordering(819) 00:18:11.849 fused_ordering(820) 00:18:12.784 fused_ordering(821) 00:18:12.784 fused_ordering(822) 00:18:12.784 fused_ordering(823) 00:18:12.784 fused_ordering(824) 00:18:12.784 fused_ordering(825) 00:18:12.784 fused_ordering(826) 00:18:12.784 fused_ordering(827) 00:18:12.784 fused_ordering(828) 00:18:12.784 fused_ordering(829) 00:18:12.784 fused_ordering(830) 00:18:12.784 fused_ordering(831) 00:18:12.784 fused_ordering(832) 00:18:12.784 fused_ordering(833) 00:18:12.784 fused_ordering(834) 00:18:12.784 fused_ordering(835) 00:18:12.784 fused_ordering(836) 00:18:12.784 fused_ordering(837) 00:18:12.784 fused_ordering(838) 00:18:12.784 fused_ordering(839) 00:18:12.784 fused_ordering(840) 00:18:12.784 fused_ordering(841) 00:18:12.784 fused_ordering(842) 00:18:12.784 fused_ordering(843) 00:18:12.784 fused_ordering(844) 00:18:12.784 fused_ordering(845) 00:18:12.784 fused_ordering(846) 00:18:12.784 fused_ordering(847) 00:18:12.784 fused_ordering(848) 00:18:12.784 
fused_ordering(849) 00:18:12.784 fused_ordering(850) 00:18:12.784 fused_ordering(851) 00:18:12.784 fused_ordering(852) 00:18:12.784 fused_ordering(853) 00:18:12.784 fused_ordering(854) 00:18:12.784 fused_ordering(855) 00:18:12.784 fused_ordering(856) 00:18:12.784 fused_ordering(857) 00:18:12.784 fused_ordering(858) 00:18:12.784 fused_ordering(859) 00:18:12.784 fused_ordering(860) 00:18:12.784 fused_ordering(861) 00:18:12.784 fused_ordering(862) 00:18:12.784 fused_ordering(863) 00:18:12.784 fused_ordering(864) 00:18:12.784 fused_ordering(865) 00:18:12.784 fused_ordering(866) 00:18:12.784 fused_ordering(867) 00:18:12.784 fused_ordering(868) 00:18:12.784 fused_ordering(869) 00:18:12.784 fused_ordering(870) 00:18:12.784 fused_ordering(871) 00:18:12.784 fused_ordering(872) 00:18:12.784 fused_ordering(873) 00:18:12.784 fused_ordering(874) 00:18:12.784 fused_ordering(875) 00:18:12.784 fused_ordering(876) 00:18:12.784 fused_ordering(877) 00:18:12.784 fused_ordering(878) 00:18:12.784 fused_ordering(879) 00:18:12.784 fused_ordering(880) 00:18:12.784 fused_ordering(881) 00:18:12.784 fused_ordering(882) 00:18:12.784 fused_ordering(883) 00:18:12.784 fused_ordering(884) 00:18:12.784 fused_ordering(885) 00:18:12.784 fused_ordering(886) 00:18:12.784 fused_ordering(887) 00:18:12.784 fused_ordering(888) 00:18:12.784 fused_ordering(889) 00:18:12.784 fused_ordering(890) 00:18:12.784 fused_ordering(891) 00:18:12.784 fused_ordering(892) 00:18:12.784 fused_ordering(893) 00:18:12.784 fused_ordering(894) 00:18:12.784 fused_ordering(895) 00:18:12.784 fused_ordering(896) 00:18:12.784 fused_ordering(897) 00:18:12.784 fused_ordering(898) 00:18:12.784 fused_ordering(899) 00:18:12.784 fused_ordering(900) 00:18:12.784 fused_ordering(901) 00:18:12.784 fused_ordering(902) 00:18:12.784 fused_ordering(903) 00:18:12.784 fused_ordering(904) 00:18:12.784 fused_ordering(905) 00:18:12.784 fused_ordering(906) 00:18:12.784 fused_ordering(907) 00:18:12.784 fused_ordering(908) 00:18:12.784 fused_ordering(909) 00:18:12.784 fused_ordering(910) 00:18:12.784 fused_ordering(911) 00:18:12.784 fused_ordering(912) 00:18:12.784 fused_ordering(913) 00:18:12.784 fused_ordering(914) 00:18:12.784 fused_ordering(915) 00:18:12.784 fused_ordering(916) 00:18:12.784 fused_ordering(917) 00:18:12.784 fused_ordering(918) 00:18:12.784 fused_ordering(919) 00:18:12.784 fused_ordering(920) 00:18:12.784 fused_ordering(921) 00:18:12.784 fused_ordering(922) 00:18:12.784 fused_ordering(923) 00:18:12.784 fused_ordering(924) 00:18:12.784 fused_ordering(925) 00:18:12.784 fused_ordering(926) 00:18:12.784 fused_ordering(927) 00:18:12.784 fused_ordering(928) 00:18:12.784 fused_ordering(929) 00:18:12.784 fused_ordering(930) 00:18:12.784 fused_ordering(931) 00:18:12.784 fused_ordering(932) 00:18:12.784 fused_ordering(933) 00:18:12.784 fused_ordering(934) 00:18:12.784 fused_ordering(935) 00:18:12.784 fused_ordering(936) 00:18:12.784 fused_ordering(937) 00:18:12.784 fused_ordering(938) 00:18:12.784 fused_ordering(939) 00:18:12.784 fused_ordering(940) 00:18:12.784 fused_ordering(941) 00:18:12.784 fused_ordering(942) 00:18:12.784 fused_ordering(943) 00:18:12.784 fused_ordering(944) 00:18:12.784 fused_ordering(945) 00:18:12.784 fused_ordering(946) 00:18:12.784 fused_ordering(947) 00:18:12.784 fused_ordering(948) 00:18:12.784 fused_ordering(949) 00:18:12.784 fused_ordering(950) 00:18:12.784 fused_ordering(951) 00:18:12.784 fused_ordering(952) 00:18:12.784 fused_ordering(953) 00:18:12.784 fused_ordering(954) 00:18:12.784 fused_ordering(955) 00:18:12.784 fused_ordering(956) 
00:18:12.784 fused_ordering(957) 00:18:12.784 fused_ordering(958) 00:18:12.784 fused_ordering(959) 00:18:12.784 fused_ordering(960) 00:18:12.784 fused_ordering(961) 00:18:12.784 fused_ordering(962) 00:18:12.784 fused_ordering(963) 00:18:12.784 fused_ordering(964) 00:18:12.784 fused_ordering(965) 00:18:12.784 fused_ordering(966) 00:18:12.784 fused_ordering(967) 00:18:12.784 fused_ordering(968) 00:18:12.784 fused_ordering(969) 00:18:12.784 fused_ordering(970) 00:18:12.784 fused_ordering(971) 00:18:12.784 fused_ordering(972) 00:18:12.784 fused_ordering(973) 00:18:12.784 fused_ordering(974) 00:18:12.784 fused_ordering(975) 00:18:12.784 fused_ordering(976) 00:18:12.784 fused_ordering(977) 00:18:12.784 fused_ordering(978) 00:18:12.784 fused_ordering(979) 00:18:12.784 fused_ordering(980) 00:18:12.784 fused_ordering(981) 00:18:12.784 fused_ordering(982) 00:18:12.784 fused_ordering(983) 00:18:12.784 fused_ordering(984) 00:18:12.784 fused_ordering(985) 00:18:12.784 fused_ordering(986) 00:18:12.784 fused_ordering(987) 00:18:12.784 fused_ordering(988) 00:18:12.784 fused_ordering(989) 00:18:12.784 fused_ordering(990) 00:18:12.784 fused_ordering(991) 00:18:12.784 fused_ordering(992) 00:18:12.784 fused_ordering(993) 00:18:12.784 fused_ordering(994) 00:18:12.784 fused_ordering(995) 00:18:12.784 fused_ordering(996) 00:18:12.784 fused_ordering(997) 00:18:12.785 fused_ordering(998) 00:18:12.785 fused_ordering(999) 00:18:12.785 fused_ordering(1000) 00:18:12.785 fused_ordering(1001) 00:18:12.785 fused_ordering(1002) 00:18:12.785 fused_ordering(1003) 00:18:12.785 fused_ordering(1004) 00:18:12.785 fused_ordering(1005) 00:18:12.785 fused_ordering(1006) 00:18:12.785 fused_ordering(1007) 00:18:12.785 fused_ordering(1008) 00:18:12.785 fused_ordering(1009) 00:18:12.785 fused_ordering(1010) 00:18:12.785 fused_ordering(1011) 00:18:12.785 fused_ordering(1012) 00:18:12.785 fused_ordering(1013) 00:18:12.785 fused_ordering(1014) 00:18:12.785 fused_ordering(1015) 00:18:12.785 fused_ordering(1016) 00:18:12.785 fused_ordering(1017) 00:18:12.785 fused_ordering(1018) 00:18:12.785 fused_ordering(1019) 00:18:12.785 fused_ordering(1020) 00:18:12.785 fused_ordering(1021) 00:18:12.785 fused_ordering(1022) 00:18:12.785 fused_ordering(1023) 00:18:12.785 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:18:12.785 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:18:12.785 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:12.785 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:18:12.785 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:12.785 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:18:12.785 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:12.785 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:12.785 rmmod nvme_tcp 00:18:13.069 rmmod nvme_fabrics 00:18:13.069 rmmod nvme_keyring 00:18:13.069 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:13.069 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:18:13.069 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 
-- # return 0 00:18:13.069 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 647616 ']' 00:18:13.069 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 647616 00:18:13.069 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 647616 ']' 00:18:13.069 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 647616 00:18:13.069 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:18:13.069 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:13.069 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 647616 00:18:13.069 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:13.069 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:13.069 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 647616' 00:18:13.069 killing process with pid 647616 00:18:13.069 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 647616 00:18:13.069 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 647616 00:18:14.450 16:23:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:14.450 16:23:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:14.450 16:23:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:14.450 16:23:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:14.450 16:23:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:14.450 16:23:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.450 16:23:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:14.450 16:23:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.352 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:16.352 00:18:16.352 real 0m10.641s 00:18:16.352 user 0m8.400s 00:18:16.352 sys 0m4.290s 00:18:16.352 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:16.352 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:16.352 ************************************ 00:18:16.352 END TEST nvmf_fused_ordering 00:18:16.352 ************************************ 00:18:16.352 16:23:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:18:16.352 16:23:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:16.352 16:23:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:16.352 16:23:36 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:16.352 ************************************ 00:18:16.352 START TEST nvmf_ns_masking 00:18:16.352 ************************************ 00:18:16.352 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:18:16.610 * Looking for test storage... 00:18:16.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:16.610 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:16.610 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:18:16.610 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:16.610 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:16.610 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:16.610 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:16.610 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:16.610 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:16.610 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:16.610 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:16.610 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:16.610 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:16.610 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:16.610 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:16.610 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:16.610 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:16.610 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:16.610 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:16.610 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:16.610 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:16.610 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:16.610 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:16.610 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.610 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.610 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.610 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:16.610 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.610 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:18:16.610 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:16.610 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:16.610 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:16.610 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:16.611 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:16.611 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:16.611 16:23:36 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:16.611 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:16.611 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:16.611 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:18:16.611 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:18:16.611 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:18:16.611 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=d85b6e72-ef7e-4870-9c6b-2b44493af134 00:18:16.611 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:18:16.611 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=3fb115e1-f174-43c5-8c87-b7da9fabf2e2 00:18:16.611 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:16.611 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:18:16.611 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:18:16.611 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:18:16.611 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=edc10b02-7edf-4d75-8db8-11f89e8f7c45 00:18:16.611 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:18:16.611 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:16.611 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:16.611 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:16.611 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:16.611 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:16.611 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.611 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:16.611 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.611 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:16.611 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:16.611 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:18:16.611 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:18.515 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:18.515 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:18.515 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:18.515 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:18.515 16:23:38 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:18.515 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:18.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:18.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:18:18.515 00:18:18.515 --- 10.0.0.2 ping statistics --- 00:18:18.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.516 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:18:18.516 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:18.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:18.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:18:18.516 00:18:18.516 --- 10.0.0.1 ping statistics --- 00:18:18.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.516 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:18:18.516 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:18.516 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:18:18.516 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:18.516 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:18.516 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:18.516 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:18.516 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:18.516 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:18.516 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:18.516 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:18:18.516 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:18.516 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:18.516 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:18.516 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=650356 00:18:18.516 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:18.516 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 650356 00:18:18.516 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 650356 ']' 00:18:18.516 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.516 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:18.516 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.516 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:18.516 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:18.775 [2024-07-26 16:23:38.291934] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:18:18.775 [2024-07-26 16:23:38.292101] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:18.775 EAL: No free 2048 kB hugepages reported on node 1 00:18:18.775 [2024-07-26 16:23:38.432459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.035 [2024-07-26 16:23:38.658486] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.035 [2024-07-26 16:23:38.658562] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:19.035 [2024-07-26 16:23:38.658586] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:19.035 [2024-07-26 16:23:38.658607] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:19.035 [2024-07-26 16:23:38.658625] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:19.035 [2024-07-26 16:23:38.658666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.600 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:19.600 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:18:19.600 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:19.600 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:19.600 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:19.600 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:19.600 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:19.858 [2024-07-26 16:23:39.561604] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:19.858 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:18:19.858 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:18:19.858 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:20.424 Malloc1 00:18:20.424 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:20.681 Malloc2 00:18:20.681 16:23:40 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:20.938 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:21.196 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:21.455 [2024-07-26 16:23:41.057638] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:21.455 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:18:21.455 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I edc10b02-7edf-4d75-8db8-11f89e8f7c45 -a 10.0.0.2 -s 4420 -i 4 00:18:21.455 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:18:21.455 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:18:21.455 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:21.455 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:21.455 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:18:23.991 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:23.991 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:23.991 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:23.991 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:23.991 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:23.991 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:18:23.991 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:23.991 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:23.991 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:23.991 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:23.991 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:18:23.991 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:23.991 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:23.991 [ 0]:0x1 00:18:23.991 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:18:23.991 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:23.991 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ca316e8612ff4a4ca20baad0853e114d 00:18:23.991 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ca316e8612ff4a4ca20baad0853e114d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:23.991 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:23.991 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:18:23.991 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:23.991 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:23.991 [ 0]:0x1 00:18:23.991 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:23.991 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:23.991 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ca316e8612ff4a4ca20baad0853e114d 00:18:23.991 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ca316e8612ff4a4ca20baad0853e114d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:23.991 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:18:23.991 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:23.991 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:23.991 [ 1]:0x2 00:18:23.991 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:23.991 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:23.991 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f77532094c6407d8fe4a567f56f1868 00:18:23.991 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f77532094c6407d8fe4a567f56f1868 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:23.991 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:18:23.991 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:24.249 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:24.249 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:24.508 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:24.767 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:18:24.767 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I edc10b02-7edf-4d75-8db8-11f89e8f7c45 -a 10.0.0.2 -s 4420 -i 4 00:18:24.767 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:24.767 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:18:24.767 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:24.767 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:18:24.767 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:18:24.767 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:18:27.292 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:27.292 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:27.292 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:27.292 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:27.292 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:27.292 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:18:27.292 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:27.292 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:27.292 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:27.292 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:27.292 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:18:27.292 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:27.293 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:27.293 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:27.293 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:27.293 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:27.293 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:27.293 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:27.293 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:27.293 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:27.293 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:18:27.293 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:27.293 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:27.293 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:27.293 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:27.293 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:27.293 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:27.293 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:27.293 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:18:27.293 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:27.293 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:27.293 [ 0]:0x2 00:18:27.293 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:27.293 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:27.293 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f77532094c6407d8fe4a567f56f1868 00:18:27.293 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f77532094c6407d8fe4a567f56f1868 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:27.293 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:27.293 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:18:27.293 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:27.293 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:27.293 [ 0]:0x1 00:18:27.293 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:27.293 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:27.293 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ca316e8612ff4a4ca20baad0853e114d 00:18:27.293 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ca316e8612ff4a4ca20baad0853e114d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:27.293 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:18:27.293 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:27.293 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:27.293 [ 1]:0x2 00:18:27.293 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 
-o json 00:18:27.293 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:27.293 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f77532094c6407d8fe4a567f56f1868 00:18:27.293 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f77532094c6407d8fe4a567f56f1868 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:27.293 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:27.551 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:18:27.551 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:27.551 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:27.551 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:27.551 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:27.551 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:27.551 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:27.551 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:27.551 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:27.551 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:27.551 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:27.551 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:27.551 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:27.551 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:27.551 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:27.551 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:27.551 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:27.551 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:27.551 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:18:27.551 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:27.551 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:27.551 [ 0]:0x2 00:18:27.551 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:27.551 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:18:27.551 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f77532094c6407d8fe4a567f56f1868 00:18:27.551 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f77532094c6407d8fe4a567f56f1868 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:27.809 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:18:27.809 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:27.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:27.809 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:28.067 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:18:28.067 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I edc10b02-7edf-4d75-8db8-11f89e8f7c45 -a 10.0.0.2 -s 4420 -i 4 00:18:28.067 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:28.067 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:18:28.067 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:28.067 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:18:28.067 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:18:28.067 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:18:30.597 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:30.597 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:30.597 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:30.597 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:18:30.597 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:30.597 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:18:30.597 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:30.597 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:30.597 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:30.597 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:30.597 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:18:30.597 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:18:30.597 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:30.597 [ 0]:0x1 00:18:30.597 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:30.597 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:30.597 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ca316e8612ff4a4ca20baad0853e114d 00:18:30.597 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ca316e8612ff4a4ca20baad0853e114d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:30.597 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:18:30.597 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:30.597 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:30.597 [ 1]:0x2 00:18:30.597 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:30.597 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:30.597 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f77532094c6407d8fe4a567f56f1868 00:18:30.597 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f77532094c6407d8fe4a567f56f1868 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:30.597 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:30.597 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:18:30.597 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:30.597 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:30.597 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:30.597 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:30.597 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:30.597 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:30.597 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:30.597 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:30.597 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:30.597 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:30.597 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:30.597 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:30.597 16:23:50 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:30.597 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:30.597 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:30.597 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:30.597 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:30.597 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:18:30.597 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:30.597 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:30.597 [ 0]:0x2 00:18:30.597 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:30.597 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:30.597 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f77532094c6407d8fe4a567f56f1868 00:18:30.597 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f77532094c6407d8fe4a567f56f1868 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:30.597 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:30.597 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:30.597 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:30.597 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:30.597 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:30.597 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:30.597 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:30.597 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:30.597 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:30.597 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:30.597 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:30.597 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:30.855 [2024-07-26 16:23:50.547683] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:30.855 request: 00:18:30.855 { 00:18:30.855 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.855 "nsid": 2, 00:18:30.855 "host": "nqn.2016-06.io.spdk:host1", 00:18:30.855 "method": "nvmf_ns_remove_host", 00:18:30.855 "req_id": 1 00:18:30.855 } 00:18:30.855 Got JSON-RPC error response 00:18:30.855 response: 00:18:30.855 { 00:18:30.855 "code": -32602, 00:18:30.855 "message": "Invalid parameters" 00:18:30.855 } 00:18:30.855 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:30.855 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:30.855 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:30.855 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:30.855 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:18:30.855 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:30.855 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:30.855 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:30.856 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:30.856 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:30.856 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:30.856 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:30.856 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:30.856 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:30.856 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:30.856 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:30.856 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:30.856 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:30.856 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:30.856 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:30.856 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:30.856 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:30.856 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:18:31.114 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:31.114 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:31.114 [ 0]:0x2 00:18:31.114 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:31.114 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:31.114 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f77532094c6407d8fe4a567f56f1868 00:18:31.114 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f77532094c6407d8fe4a567f56f1868 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:31.114 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:18:31.114 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:31.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:31.114 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=651981 00:18:31.114 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:18:31.114 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.114 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 651981 /var/tmp/host.sock 00:18:31.114 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 651981 ']' 00:18:31.114 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:18:31.114 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:31.114 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:31.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:31.114 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:31.114 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:31.372 [2024-07-26 16:23:50.926513] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:18:31.372 [2024-07-26 16:23:50.926662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid651981 ] 00:18:31.372 EAL: No free 2048 kB hugepages reported on node 1 00:18:31.372 [2024-07-26 16:23:51.058433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.658 [2024-07-26 16:23:51.318003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.592 16:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:32.592 16:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:18:32.592 16:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:32.849 16:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:33.108 16:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid d85b6e72-ef7e-4870-9c6b-2b44493af134 00:18:33.108 16:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:18:33.108 16:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g D85B6E72EF7E48709C6B2B44493AF134 -i 00:18:33.365 16:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 3fb115e1-f174-43c5-8c87-b7da9fabf2e2 00:18:33.365 16:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:18:33.365 16:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 3FB115E1F17443C58C87B7DA9FABF2E2 -i 00:18:33.621 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:33.878 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:18:34.135 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:34.135 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:34.392 nvme0n1 00:18:34.392 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:34.392 16:23:54 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:34.957 nvme1n2 00:18:34.957 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:18:34.957 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:34.957 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:34.957 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:34.957 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:35.215 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:35.215 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:35.215 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:35.215 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:35.472 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ d85b6e72-ef7e-4870-9c6b-2b44493af134 == \d\8\5\b\6\e\7\2\-\e\f\7\e\-\4\8\7\0\-\9\c\6\b\-\2\b\4\4\4\9\3\a\f\1\3\4 ]] 00:18:35.472 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:35.472 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:35.472 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:35.730 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 3fb115e1-f174-43c5-8c87-b7da9fabf2e2 == \3\f\b\1\1\5\e\1\-\f\1\7\4\-\4\3\c\5\-\8\c\8\7\-\b\7\d\a\9\f\a\b\f\2\e\2 ]] 00:18:35.730 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 651981 00:18:35.730 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 651981 ']' 00:18:35.730 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 651981 00:18:35.730 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:18:35.730 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:35.730 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 651981 00:18:35.730 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:35.730 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:35.730 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 651981' 00:18:35.730 killing process with pid 651981 00:18:35.730 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 651981 00:18:35.730 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 651981 00:18:38.260 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:38.260 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:18:38.260 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:18:38.260 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:38.260 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:18:38.260 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:38.260 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:18:38.260 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:38.260 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:38.260 rmmod nvme_tcp 00:18:38.260 rmmod nvme_fabrics 00:18:38.260 rmmod nvme_keyring 00:18:38.260 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:38.260 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:18:38.260 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:18:38.260 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 650356 ']' 00:18:38.260 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 650356 00:18:38.260 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 650356 ']' 00:18:38.260 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 650356 00:18:38.260 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:18:38.260 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:38.260 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 650356 00:18:38.260 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:38.260 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:38.260 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 650356' 00:18:38.260 killing process with pid 650356 00:18:38.260 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 650356 00:18:38.260 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 650356 00:18:40.162 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:40.162 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:40.162 16:23:59 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:40.162 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:40.162 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:40.162 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.162 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:40.162 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.067 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:42.067 00:18:42.067 real 0m25.600s 00:18:42.067 user 0m34.620s 00:18:42.067 sys 0m4.295s 00:18:42.067 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:42.068 ************************************ 00:18:42.068 END TEST nvmf_ns_masking 00:18:42.068 ************************************ 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:42.068 ************************************ 00:18:42.068 START TEST nvmf_nvme_cli 00:18:42.068 ************************************ 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:42.068 * Looking for test storage... 
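For reference, the ns_masking checks traced above reduce to a short host-side sequence: attach the same subsystem once per host NQN and compare the namespaces each host can see. The snippet below is a minimal sketch of that flow, assuming the SPDK host application is already listening on /var/tmp/host.sock and the target exposes nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 as in the log; the host1 NQN, the bdev names and the expected UUID values are illustrative, not taken verbatim from this run.

# Attach the subsystem twice, once per host NQN (the target's masking decides what each sees)
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
$RPC bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
$RPC bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
# List the namespaces visible to each controller and read back their UUIDs
$RPC bdev_get_bdevs | jq -r '.[].name' | sort | xargs      # e.g. "nvme0n1 nvme1n2"
$RPC bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'
$RPC bdev_get_bdevs -b nvme1n2 | jq -r '.[].uuid'

The test then asserts that the visible bdev set and each namespace UUID match what the target's masking configuration allows, which is exactly the string comparisons visible in the trace above.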
00:18:42.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.068 16:24:01 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # 
nvmftestinit 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:18:42.068 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:43.972 16:24:03 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:43.972 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:43.972 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:43.972 16:24:03 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:43.972 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:43.973 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:43.973 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:43.973 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:43.973 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:43.973 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:43.973 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:43.973 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:43.973 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:43.973 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:43.973 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:43.973 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:43.973 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:43.973 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:43.973 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:43.973 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:43.973 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:43.973 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:43.973 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:43.973 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:43.973 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:18:43.973 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:43.973 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:43.973 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:43.973 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:43.973 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:43.973 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:43.973 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:43.973 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:43.973 16:24:03 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:43.973 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:43.973 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:43.973 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:43.973 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:43.973 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:43.973 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:43.973 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:44.231 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:44.231 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:44.231 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:44.231 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:44.231 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:44.231 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:44.231 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:44.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:44.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:18:44.231 00:18:44.231 --- 10.0.0.2 ping statistics --- 00:18:44.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.231 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:18:44.231 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:44.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:44.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:18:44.231 00:18:44.231 --- 10.0.0.1 ping statistics --- 00:18:44.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.231 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:18:44.231 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:44.231 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:18:44.231 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:44.231 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:44.231 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:44.231 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:44.231 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:44.231 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:44.231 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:44.231 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:44.231 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:44.231 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:44.231 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:44.231 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=654984 00:18:44.231 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:44.231 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 654984 00:18:44.231 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 654984 ']' 00:18:44.231 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.231 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:44.231 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:44.231 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:44.231 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:44.231 [2024-07-26 16:24:03.928610] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:18:44.231 [2024-07-26 16:24:03.928736] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:44.489 EAL: No free 2048 kB hugepages reported on node 1 00:18:44.489 [2024-07-26 16:24:04.104604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:44.748 [2024-07-26 16:24:04.392890] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:44.748 [2024-07-26 16:24:04.392963] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:44.748 [2024-07-26 16:24:04.392990] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:44.748 [2024-07-26 16:24:04.393012] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:44.748 [2024-07-26 16:24:04.393033] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:44.748 [2024-07-26 16:24:04.393154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.748 [2024-07-26 16:24:04.393185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:44.748 [2024-07-26 16:24:04.393229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.748 [2024-07-26 16:24:04.393239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:45.313 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:45.313 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:18:45.313 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:45.313 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:45.313 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:45.313 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:45.313 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:45.313 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.313 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:45.313 [2024-07-26 16:24:05.021479] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:45.313 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.313 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:45.313 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.313 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:45.572 Malloc0 00:18:45.572 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.572 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:45.572 16:24:05 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.572 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:45.572 Malloc1 00:18:45.572 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.572 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:45.572 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.572 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:45.572 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.572 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:45.572 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.572 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:45.572 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.572 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:45.572 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.572 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:45.572 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.572 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:45.572 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.572 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:45.572 [2024-07-26 16:24:05.208238] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:45.572 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.572 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:45.572 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.572 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:45.572 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.572 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:18:45.572 00:18:45.572 Discovery Log Number of Records 2, Generation counter 2 00:18:45.572 =====Discovery Log Entry 0====== 00:18:45.572 trtype: tcp 00:18:45.572 adrfam: ipv4 00:18:45.572 subtype: current discovery subsystem 00:18:45.572 treq: not required 
00:18:45.572 portid: 0 00:18:45.572 trsvcid: 4420 00:18:45.572 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:45.572 traddr: 10.0.0.2 00:18:45.572 eflags: explicit discovery connections, duplicate discovery information 00:18:45.572 sectype: none 00:18:45.572 =====Discovery Log Entry 1====== 00:18:45.572 trtype: tcp 00:18:45.572 adrfam: ipv4 00:18:45.572 subtype: nvme subsystem 00:18:45.572 treq: not required 00:18:45.572 portid: 0 00:18:45.572 trsvcid: 4420 00:18:45.572 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:45.572 traddr: 10.0.0.2 00:18:45.572 eflags: none 00:18:45.572 sectype: none 00:18:45.572 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:45.572 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:45.572 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:18:45.572 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:45.572 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:18:45.572 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:18:45.572 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:45.572 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:18:45.572 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:45.572 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:45.572 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:46.506 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:46.506 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:18:46.506 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:46.506 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:18:46.506 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:18:46.506 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:18:48.404 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:48.405 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:48.405 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:48.405 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:18:48.405 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:48.405 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:18:48.405 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:48.405 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:18:48.405 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:48.405 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:18:48.405 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:18:48.405 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:48.405 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:18:48.405 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:48.405 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:48.405 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:18:48.405 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:48.405 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:48.405 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:18:48.405 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:48.405 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:18:48.405 /dev/nvme0n1 ]] 00:18:48.405 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:48.405 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:48.405 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:18:48.405 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:48.405 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:18:48.405 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:18:48.405 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:48.405 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:18:48.405 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:48.405 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:48.405 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:18:48.405 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:48.405 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:48.405 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:18:48.405 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:48.405 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:48.405 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:48.405 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:18:48.405 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:48.405 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:18:48.405 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:48.405 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:48.663 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:48.663 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:48.663 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:18:48.663 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:48.663 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:48.663 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.663 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:48.663 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.663 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:48.663 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:48.663 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:48.663 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:18:48.663 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:48.663 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:18:48.663 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:48.663 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:48.663 rmmod nvme_tcp 00:18:48.663 rmmod nvme_fabrics 00:18:48.663 rmmod nvme_keyring 00:18:48.663 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:48.663 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:18:48.663 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:18:48.663 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 654984 ']' 00:18:48.663 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 654984 00:18:48.663 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 654984 ']' 00:18:48.663 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 654984 00:18:48.663 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:18:48.663 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:48.663 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 654984 00:18:48.663 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:48.663 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:48.663 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 654984' 00:18:48.663 killing process with pid 654984 00:18:48.663 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 654984 00:18:48.663 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 654984 00:18:50.592 16:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:50.592 16:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:50.592 16:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:50.592 16:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:50.592 16:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:50.592 16:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.592 16:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:50.592 16:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.496 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:52.496 00:18:52.496 real 0m10.218s 00:18:52.496 user 0m21.069s 00:18:52.496 sys 0m2.340s 00:18:52.496 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:52.496 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:52.496 ************************************ 00:18:52.496 END TEST nvmf_nvme_cli 00:18:52.496 ************************************ 00:18:52.496 16:24:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:18:52.496 16:24:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:52.496 16:24:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:52.496 16:24:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:52.496 16:24:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:52.496 ************************************ 00:18:52.496 START TEST nvmf_auth_target 00:18:52.496 ************************************ 00:18:52.497 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:52.497 * Looking for test storage... 
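The nvme_cli run above boils down to a target-side setup over RPC followed by standard nvme-cli operations from the initiator. A condensed sketch of that sequence is shown below, assuming a running nvmf_tgt reachable through rpc_cmd (the harness wrapper around scripts/rpc.py), the same addresses and NQNs as in the log, and NVME_HOSTNQN/NVME_HOSTID set the way nvmf/common.sh generates them with nvme gen-hostnqn; the harness's netns plumbing and error handling are omitted.

# Target side: transport, backing bdevs, subsystem, namespaces, listener
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_malloc_create 64 512 -b Malloc0
rpc_cmd bdev_malloc_create 64 512 -b Malloc1
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: discover, connect, check both namespaces appear, then tear down
nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -a 10.0.0.2 -s 4420
nvme connect  --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME     # the test waits for this to reach 2
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The device count is polled via lsblk against the subsystem serial rather than parsed from nvme list, which is why the trace above greps for SPDKISFASTANDAWESOME both when waiting for the connect and when confirming the disconnect.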
00:18:52.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:52.497 
16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:52.497 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:54.397 
16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:54.397 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:54.397 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:54.397 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:54.398 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:54.398 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:54.398 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:54.398 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:54.398 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:54.398 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:54.398 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:54.398 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:54.398 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:54.398 16:24:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:54.398 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:54.398 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:54.398 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:54.398 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:54.398 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:54.398 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:54.398 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:54.398 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:54.398 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:54.398 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:54.398 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:54.398 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:54.398 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:54.398 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:54.398 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:54.398 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:54.398 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:54.398 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:54.398 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:54.398 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:54.398 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:54.398 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:54.398 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:54.398 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:54.398 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:54.398 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:54.398 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:54.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:54.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:18:54.398 00:18:54.398 --- 10.0.0.2 ping statistics --- 00:18:54.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:54.398 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:18:54.398 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:54.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:54.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:18:54.398 00:18:54.398 --- 10.0.0.1 ping statistics --- 00:18:54.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:54.398 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:18:54.398 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:54.398 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:18:54.398 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:54.398 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:54.398 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:54.398 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:54.398 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:54.398 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:54.398 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:54.398 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:18:54.398 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:54.398 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:54.398 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.398 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=658125 00:18:54.398 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 658125 00:18:54.398 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 658125 ']' 00:18:54.398 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.398 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:54.398 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:54.398 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
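The nvmf_tcp_init sequence traced just above splits the two E810 ports (cvl_0_0 and cvl_0_1) between a target-side network namespace and the root namespace acting as the initiator, assigns 10.0.0.2 and 10.0.0.1 respectively, opens TCP port 4420, and checks reachability with ping in both directions. A condensed re-run of the same commands, copied from the trace; this is a sketch for a dedicated lab machine, must run as root, and reconfigures the named NICs:

    #!/usr/bin/env bash
    set -e
    TGT_IF=cvl_0_0          # interface moved into the target namespace
    INI_IF=cvl_0_1          # interface left in the root (initiator) namespace
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator address
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target address
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up

    # Allow NVMe/TCP (port 4420) in from the initiator-side interface.
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

    # Sanity-check both directions, exactly as the test does.
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1
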
00:18:54.398 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:54.398 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=658279 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6b3bd79995feaa70acf072783bdb4a39acc03584f65ee832 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.zaa 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6b3bd79995feaa70acf072783bdb4a39acc03584f65ee832 0 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6b3bd79995feaa70acf072783bdb4a39acc03584f65ee832 0 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6b3bd79995feaa70acf072783bdb4a39acc03584f65ee832 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 
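At this point two SPDK processes are running: the NVMe-oF target (nvmf_tgt, pid 658125) inside the cvl_0_0_ns_spdk namespace with nvmf_auth debug logging on the default RPC socket /var/tmp/spdk.sock, and a host-side application (spdk_tgt, pid 658279) on /var/tmp/host.sock with nvme_auth logging, acting as the SPDK initiator. A minimal launch sketch; $SPDK_DIR is assumed to point at the built tree, and polling rpc_get_methods with a timeout is used here as a stand-in for the harness's waitforlisten helper:

    #!/usr/bin/env bash
    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}   # assumption: built SPDK tree

    # Target side: runs in the namespace that owns cvl_0_0 / 10.0.0.2.
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvmf_auth &
    target_pid=$!

    # Host side: a second SPDK app with its own RPC socket, used as the NVMe initiator.
    "$SPDK_DIR/build/bin/spdk_tgt" -m 2 -r /var/tmp/host.sock -L nvme_auth &
    host_pid=$!

    # Block until each RPC socket answers before issuing further rpc.py calls
    # (the real harness uses waitforlisten; this polling is an assumption).
    "$SPDK_DIR/scripts/rpc.py" -t 30 rpc_get_methods >/dev/null
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/host.sock -t 30 rpc_get_methods >/dev/null
    echo "nvmf_tgt pid: $target_pid, host app pid: $host_pid"
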
00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.zaa 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.zaa 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.zaa 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ef3e7f7c2928d4bdd791f2bdf1ff1e22b0d37531e1af671de2d4e6a53862b467 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.To8 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ef3e7f7c2928d4bdd791f2bdf1ff1e22b0d37531e1af671de2d4e6a53862b467 3 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ef3e7f7c2928d4bdd791f2bdf1ff1e22b0d37531e1af671de2d4e6a53862b467 3 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ef3e7f7c2928d4bdd791f2bdf1ff1e22b0d37531e1af671de2d4e6a53862b467 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.To8 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.To8 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.To8 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 
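Each gen_dhchap_key call in the trace draws random bytes with xxd and wraps them into the DH-HMAC-CHAP secret format (DHHC-1:<hash id>:<base64 payload>:) that later appears on the nvme connect command lines. A hedged re-implementation of that step: the digest-id mapping (0 = none, 1 = sha256, 2 = sha384, 3 = sha512) is taken from the trace, and the payload is assumed to be the ASCII secret followed by its CRC-32, base64-encoded, which is consistent with the secrets used further down in this log:

    #!/usr/bin/env bash
    gen_dhchap_key() {           # usage: gen_dhchap_key <digest-id> <hex-chars>
        local digest=$1 len=$2 key file
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)     # e.g. 48 hex chars from 24 random bytes
        file=$(mktemp -t spdk.key-XXXXXX)
        python3 - "$digest" "$key" > "$file" <<'EOF'
    import base64, sys, zlib
    digest, key = int(sys.argv[1]), sys.argv[2].encode()
    # Assumption: payload = ASCII secret || CRC-32(secret) in little-endian, base64-encoded.
    payload = key + zlib.crc32(key).to_bytes(4, "little")
    print(f"DHHC-1:{digest:02}:{base64.b64encode(payload).decode()}:")
    EOF
        chmod 0600 "$file"
        echo "$file"
    }

    keyfile=$(gen_dhchap_key 0 48)   # analogous to 'gen_dhchap_key null 48' in the trace
    cat "$keyfile"
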
00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=415245aff35b2c1fb666948fa190f796 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.fgv 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 415245aff35b2c1fb666948fa190f796 1 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 415245aff35b2c1fb666948fa190f796 1 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=415245aff35b2c1fb666948fa190f796 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.fgv 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.fgv 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.fgv 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:55.774 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5cd2c51f41a6d0d67d0ddd53eedd80e5b270fc21a564d711 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.0oU 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5cd2c51f41a6d0d67d0ddd53eedd80e5b270fc21a564d711 2 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5cd2c51f41a6d0d67d0ddd53eedd80e5b270fc21a564d711 2 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:55.775 16:24:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5cd2c51f41a6d0d67d0ddd53eedd80e5b270fc21a564d711 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.0oU 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.0oU 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.0oU 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b096af49b202eeb1304aacb4de244729ad037efc2aa1e682 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.hOP 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b096af49b202eeb1304aacb4de244729ad037efc2aa1e682 2 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b096af49b202eeb1304aacb4de244729ad037efc2aa1e682 2 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b096af49b202eeb1304aacb4de244729ad037efc2aa1e682 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.hOP 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.hOP 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.hOP 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 
00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0d594c444a98cfd213231ea5c4b148e0 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.At6 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0d594c444a98cfd213231ea5c4b148e0 1 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0d594c444a98cfd213231ea5c4b148e0 1 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0d594c444a98cfd213231ea5c4b148e0 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.At6 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.At6 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.At6 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=39e57b1cc414579d7c1f7b0326c52bfeb4bc9a9f931c5b20cdb3f41258353ff8 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.SIt 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # 
format_dhchap_key 39e57b1cc414579d7c1f7b0326c52bfeb4bc9a9f931c5b20cdb3f41258353ff8 3 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 39e57b1cc414579d7c1f7b0326c52bfeb4bc9a9f931c5b20cdb3f41258353ff8 3 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=39e57b1cc414579d7c1f7b0326c52bfeb4bc9a9f931c5b20cdb3f41258353ff8 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.SIt 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.SIt 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.SIt 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 658125 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 658125 ']' 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:55.775 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.034 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:56.034 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:56.034 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 658279 /var/tmp/host.sock 00:18:56.034 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 658279 ']' 00:18:56.034 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:18:56.034 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:56.034 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:56.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
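The generated key files are registered next under stable names (key0..key3 and the controller keys ckey0..ckey2) with both RPC servers: rpc_cmd talks to the target on the default /var/tmp/spdk.sock, hostrpc to the initiator app on /var/tmp/host.sock. A compact sketch of that registration, using the file paths printed above; key3 deliberately has no controller key in this run:

    #!/usr/bin/env bash
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    keys=(/tmp/spdk.key-null.zaa /tmp/spdk.key-sha256.fgv /tmp/spdk.key-sha384.hOP /tmp/spdk.key-sha512.SIt)
    ckeys=(/tmp/spdk.key-sha512.To8 /tmp/spdk.key-sha384.0oU /tmp/spdk.key-sha256.At6 "")

    for i in "${!keys[@]}"; do
        # Register the DH-CHAP key on the target and on the host under the same name.
        "$RPC" keyring_file_add_key "key$i" "${keys[$i]}"
        "$RPC" -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[$i]}"
        # Controller (bidirectional) keys are optional.
        if [[ -n "${ckeys[$i]}" ]]; then
            "$RPC" keyring_file_add_key "ckey$i" "${ckeys[$i]}"
            "$RPC" -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        fi
    done
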
00:18:56.034 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:56.034 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.969 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:56.970 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:56.970 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:18:56.970 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.970 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.970 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.970 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:56.970 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.zaa 00:18:56.970 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.970 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.970 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.970 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.zaa 00:18:56.970 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.zaa 00:18:57.228 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.To8 ]] 00:18:57.228 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.To8 00:18:57.228 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.228 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.228 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.228 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.To8 00:18:57.228 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.To8 00:18:57.486 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:57.486 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.fgv 00:18:57.486 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.486 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.486 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.486 16:24:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.fgv 00:18:57.486 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.fgv 00:18:57.743 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.0oU ]] 00:18:57.743 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.0oU 00:18:57.743 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.743 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.743 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.743 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.0oU 00:18:57.743 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.0oU 00:18:58.001 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:58.001 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.hOP 00:18:58.001 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.001 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.001 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.001 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.hOP 00:18:58.001 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.hOP 00:18:58.259 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.At6 ]] 00:18:58.259 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.At6 00:18:58.259 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.259 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.259 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.259 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.At6 00:18:58.259 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.At6 00:18:58.518 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:58.518 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.SIt 00:18:58.518 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.518 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.518 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.518 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.SIt 00:18:58.518 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.SIt 00:18:58.776 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:18:58.776 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:58.776 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:58.776 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.776 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:58.776 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:59.034 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:18:59.034 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:59.034 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:59.034 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:59.034 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:59.034 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.034 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.034 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.034 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.034 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.034 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.034 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.600 00:18:59.600 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:59.600 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.600 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.600 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.600 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.600 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.600 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.600 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.600 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.600 { 00:18:59.600 "cntlid": 1, 00:18:59.600 "qid": 0, 00:18:59.600 "state": "enabled", 00:18:59.600 "thread": "nvmf_tgt_poll_group_000", 00:18:59.600 "listen_address": { 00:18:59.600 "trtype": "TCP", 00:18:59.600 "adrfam": "IPv4", 00:18:59.600 "traddr": "10.0.0.2", 00:18:59.600 "trsvcid": "4420" 00:18:59.600 }, 00:18:59.600 "peer_address": { 00:18:59.600 "trtype": "TCP", 00:18:59.600 "adrfam": "IPv4", 00:18:59.600 "traddr": "10.0.0.1", 00:18:59.600 "trsvcid": "47178" 00:18:59.600 }, 00:18:59.600 "auth": { 00:18:59.600 "state": "completed", 00:18:59.600 "digest": "sha256", 00:18:59.600 "dhgroup": "null" 00:18:59.600 } 00:18:59.600 } 00:18:59.600 ]' 00:18:59.600 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.858 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:59.858 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:59.858 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:59.858 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:59.858 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.858 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.858 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.116 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NmIzYmQ3OTk5NWZlYWE3MGFjZjA3Mjc4M2JkYjRhMzlhY2MwMzU4NGY2NWVlODMycseK5g==: --dhchap-ctrl-secret 
DHHC-1:03:ZWYzZTdmN2MyOTI4ZDRiZGQ3OTFmMmJkZjFmZjFlMjJiMGQzNzUzMWUxYWY2NzFkZTJkNGU2YTUzODYyYjQ2N+bo6Mk=: 00:19:01.050 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.050 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:01.050 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.050 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.050 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.050 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.050 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:01.050 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:01.308 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:19:01.308 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.308 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:01.308 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:01.308 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:01.308 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.308 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.308 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.308 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.308 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.308 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.308 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.566 00:19:01.566 16:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- 
# hostrpc bdev_nvme_get_controllers 00:19:01.566 16:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:01.566 16:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.824 16:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.824 16:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.824 16:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.824 16:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.824 16:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.824 16:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:01.824 { 00:19:01.824 "cntlid": 3, 00:19:01.824 "qid": 0, 00:19:01.824 "state": "enabled", 00:19:01.824 "thread": "nvmf_tgt_poll_group_000", 00:19:01.824 "listen_address": { 00:19:01.824 "trtype": "TCP", 00:19:01.824 "adrfam": "IPv4", 00:19:01.824 "traddr": "10.0.0.2", 00:19:01.824 "trsvcid": "4420" 00:19:01.824 }, 00:19:01.824 "peer_address": { 00:19:01.824 "trtype": "TCP", 00:19:01.824 "adrfam": "IPv4", 00:19:01.824 "traddr": "10.0.0.1", 00:19:01.824 "trsvcid": "43234" 00:19:01.824 }, 00:19:01.824 "auth": { 00:19:01.824 "state": "completed", 00:19:01.824 "digest": "sha256", 00:19:01.824 "dhgroup": "null" 00:19:01.824 } 00:19:01.824 } 00:19:01.824 ]' 00:19:01.824 16:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.824 16:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:01.824 16:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.082 16:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:02.082 16:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.082 16:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.082 16:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.082 16:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.340 16:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDE1MjQ1YWZmMzViMmMxZmI2NjY5NDhmYTE5MGY3OTbCgy6n: --dhchap-ctrl-secret DHHC-1:02:NWNkMmM1MWY0MWE2ZDBkNjdkMGRkZDUzZWVkZDgwZTViMjcwZmMyMWE1NjRkNzExeIF+8w==: 00:19:03.273 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.273 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:03.273 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.273 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.273 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.273 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:03.273 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:03.273 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:03.531 16:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:19:03.531 16:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.531 16:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:03.531 16:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:03.531 16:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:03.531 16:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.531 16:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.531 16:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.531 16:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.531 16:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.531 16:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.531 16:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.788 00:19:03.788 16:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.788 16:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.788 16:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.045 16:24:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.045 16:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.046 16:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.046 16:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.046 16:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.046 16:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:04.046 { 00:19:04.046 "cntlid": 5, 00:19:04.046 "qid": 0, 00:19:04.046 "state": "enabled", 00:19:04.046 "thread": "nvmf_tgt_poll_group_000", 00:19:04.046 "listen_address": { 00:19:04.046 "trtype": "TCP", 00:19:04.046 "adrfam": "IPv4", 00:19:04.046 "traddr": "10.0.0.2", 00:19:04.046 "trsvcid": "4420" 00:19:04.046 }, 00:19:04.046 "peer_address": { 00:19:04.046 "trtype": "TCP", 00:19:04.046 "adrfam": "IPv4", 00:19:04.046 "traddr": "10.0.0.1", 00:19:04.046 "trsvcid": "43264" 00:19:04.046 }, 00:19:04.046 "auth": { 00:19:04.046 "state": "completed", 00:19:04.046 "digest": "sha256", 00:19:04.046 "dhgroup": "null" 00:19:04.046 } 00:19:04.046 } 00:19:04.046 ]' 00:19:04.046 16:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:04.046 16:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:04.046 16:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:04.046 16:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:04.046 16:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:04.046 16:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.046 16:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.046 16:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.303 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjA5NmFmNDliMjAyZWViMTMwNGFhY2I0ZGUyNDQ3MjlhZDAzN2VmYzJhYTFlNjgy5hHQCg==: --dhchap-ctrl-secret DHHC-1:01:MGQ1OTRjNDQ0YTk4Y2ZkMjEzMjMxZWE1YzRiMTQ4ZTA1WqAV: 00:19:05.236 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.236 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:05.236 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.236 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
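Each connect_authenticate round visible above follows the same pattern: allow the host NQN on the subsystem with a specific key pair, attach an SPDK bdev controller with the same keys, then confirm via RPC that the established queue pair actually negotiated the expected digest, DH group, and completed auth state. A condensed sketch of one round (sha256 / null / key0), using only the RPC calls that appear in the trace:

    #!/usr/bin/env bash
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    # Host side: restrict the initiator to the digest/DH group under test.
    "$RPC" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

    # Target side: allow the host on the subsystem with key0 (ckey0 enables bidirectional auth).
    "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Host side: attach a controller over TCP, authenticating with the same keys.
    "$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Verify: controller exists and the qpair reports the negotiated auth parameters.
    [[ $("$RPC" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBNQN")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # Tear down the SPDK-initiator side before the kernel-initiator check.
    "$RPC" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
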
00:19:05.236 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.236 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:05.236 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:05.236 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:05.494 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:19:05.494 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:05.494 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:05.494 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:05.494 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:05.494 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.494 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:05.494 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.494 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.494 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.494 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:05.494 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:05.778 00:19:05.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.037 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.037 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.037 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.037 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:19:06.037 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.037 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:06.037 { 00:19:06.037 "cntlid": 7, 00:19:06.037 "qid": 0, 00:19:06.037 "state": "enabled", 00:19:06.037 "thread": "nvmf_tgt_poll_group_000", 00:19:06.037 "listen_address": { 00:19:06.037 "trtype": "TCP", 00:19:06.037 "adrfam": "IPv4", 00:19:06.037 "traddr": "10.0.0.2", 00:19:06.037 "trsvcid": "4420" 00:19:06.037 }, 00:19:06.037 "peer_address": { 00:19:06.037 "trtype": "TCP", 00:19:06.037 "adrfam": "IPv4", 00:19:06.037 "traddr": "10.0.0.1", 00:19:06.037 "trsvcid": "43300" 00:19:06.037 }, 00:19:06.037 "auth": { 00:19:06.037 "state": "completed", 00:19:06.037 "digest": "sha256", 00:19:06.037 "dhgroup": "null" 00:19:06.037 } 00:19:06.037 } 00:19:06.037 ]' 00:19:06.037 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:06.295 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:06.295 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:06.295 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:06.295 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:06.295 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.295 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.295 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.553 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzllNTdiMWNjNDE0NTc5ZDdjMWY3YjAzMjZjNTJiZmViNGJjOWE5ZjkzMWM1YjIwY2RiM2Y0MTI1ODM1M2ZmOEEXW9w=: 00:19:07.486 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.486 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:07.486 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.486 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.486 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.486 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:07.486 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:07.486 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 
00:19:07.486 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:07.744 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:19:07.744 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:07.744 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:07.744 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:07.744 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:07.744 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.744 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.744 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.744 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.744 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.744 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.744 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.002 00:19:08.002 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.002 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:08.002 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.260 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.260 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.260 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.260 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.260 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.260 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:08.260 { 00:19:08.260 "cntlid": 9, 00:19:08.260 "qid": 0, 00:19:08.260 "state": 
"enabled", 00:19:08.260 "thread": "nvmf_tgt_poll_group_000", 00:19:08.260 "listen_address": { 00:19:08.260 "trtype": "TCP", 00:19:08.260 "adrfam": "IPv4", 00:19:08.260 "traddr": "10.0.0.2", 00:19:08.260 "trsvcid": "4420" 00:19:08.260 }, 00:19:08.260 "peer_address": { 00:19:08.260 "trtype": "TCP", 00:19:08.260 "adrfam": "IPv4", 00:19:08.260 "traddr": "10.0.0.1", 00:19:08.260 "trsvcid": "43338" 00:19:08.260 }, 00:19:08.260 "auth": { 00:19:08.260 "state": "completed", 00:19:08.260 "digest": "sha256", 00:19:08.260 "dhgroup": "ffdhe2048" 00:19:08.260 } 00:19:08.260 } 00:19:08.260 ]' 00:19:08.260 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:08.517 16:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:08.517 16:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:08.517 16:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:08.517 16:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:08.517 16:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.517 16:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.517 16:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.776 16:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NmIzYmQ3OTk5NWZlYWE3MGFjZjA3Mjc4M2JkYjRhMzlhY2MwMzU4NGY2NWVlODMycseK5g==: --dhchap-ctrl-secret DHHC-1:03:ZWYzZTdmN2MyOTI4ZDRiZGQ3OTFmMmJkZjFmZjFlMjJiMGQzNzUzMWUxYWY2NzFkZTJkNGU2YTUzODYyYjQ2N+bo6Mk=: 00:19:09.714 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.714 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:09.714 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.714 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.714 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.714 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:09.714 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:09.714 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:09.971 16:24:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:19:09.971 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.971 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:09.971 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:09.971 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:09.971 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.971 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.972 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.972 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.972 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.972 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.972 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.228 00:19:10.228 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:10.228 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:10.228 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.486 16:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.486 16:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.486 16:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.486 16:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.486 16:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.486 16:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.486 { 00:19:10.486 "cntlid": 11, 00:19:10.486 "qid": 0, 00:19:10.486 "state": "enabled", 00:19:10.486 "thread": "nvmf_tgt_poll_group_000", 00:19:10.486 "listen_address": { 00:19:10.486 "trtype": "TCP", 00:19:10.486 "adrfam": "IPv4", 00:19:10.486 "traddr": "10.0.0.2", 00:19:10.486 "trsvcid": "4420" 00:19:10.486 }, 00:19:10.486 "peer_address": { 
00:19:10.486 "trtype": "TCP", 00:19:10.486 "adrfam": "IPv4", 00:19:10.486 "traddr": "10.0.0.1", 00:19:10.486 "trsvcid": "59856" 00:19:10.486 }, 00:19:10.486 "auth": { 00:19:10.486 "state": "completed", 00:19:10.486 "digest": "sha256", 00:19:10.486 "dhgroup": "ffdhe2048" 00:19:10.486 } 00:19:10.486 } 00:19:10.486 ]' 00:19:10.486 16:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.744 16:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:10.744 16:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.744 16:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:10.744 16:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.744 16:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.744 16:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.744 16:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.002 16:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDE1MjQ1YWZmMzViMmMxZmI2NjY5NDhmYTE5MGY3OTbCgy6n: --dhchap-ctrl-secret DHHC-1:02:NWNkMmM1MWY0MWE2ZDBkNjdkMGRkZDUzZWVkZDgwZTViMjcwZmMyMWE1NjRkNzExeIF+8w==: 00:19:11.937 16:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.937 16:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:11.937 16:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.937 16:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.937 16:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.937 16:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:11.937 16:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:11.937 16:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:12.195 16:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:19:12.195 16:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.195 16:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 
00:19:12.195 16:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:12.195 16:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:12.195 16:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.195 16:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.195 16:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.195 16:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.195 16:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.195 16:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.195 16:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.761 00:19:12.761 16:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:12.761 16:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.761 16:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.761 16:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.761 16:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.761 16:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.761 16:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.019 16:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.019 16:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.019 { 00:19:13.019 "cntlid": 13, 00:19:13.019 "qid": 0, 00:19:13.019 "state": "enabled", 00:19:13.019 "thread": "nvmf_tgt_poll_group_000", 00:19:13.019 "listen_address": { 00:19:13.019 "trtype": "TCP", 00:19:13.019 "adrfam": "IPv4", 00:19:13.019 "traddr": "10.0.0.2", 00:19:13.019 "trsvcid": "4420" 00:19:13.019 }, 00:19:13.019 "peer_address": { 00:19:13.019 "trtype": "TCP", 00:19:13.019 "adrfam": "IPv4", 00:19:13.019 "traddr": "10.0.0.1", 00:19:13.019 "trsvcid": "59884" 00:19:13.019 }, 00:19:13.019 "auth": { 00:19:13.019 "state": "completed", 00:19:13.019 "digest": "sha256", 00:19:13.019 "dhgroup": "ffdhe2048" 00:19:13.019 } 00:19:13.019 } 00:19:13.019 ]' 00:19:13.019 16:24:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.019 16:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:13.019 16:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.019 16:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:13.019 16:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.019 16:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.019 16:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.019 16:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.276 16:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjA5NmFmNDliMjAyZWViMTMwNGFhY2I0ZGUyNDQ3MjlhZDAzN2VmYzJhYTFlNjgy5hHQCg==: --dhchap-ctrl-secret DHHC-1:01:MGQ1OTRjNDQ0YTk4Y2ZkMjEzMjMxZWE1YzRiMTQ4ZTA1WqAV: 00:19:14.211 16:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.211 16:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:14.211 16:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.211 16:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.211 16:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.211 16:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:14.211 16:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:14.211 16:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:14.469 16:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:19:14.469 16:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:14.469 16:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:14.469 16:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:14.469 16:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:14.469 16:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:19:14.469 16:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:14.469 16:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.469 16:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.469 16:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.469 16:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:14.469 16:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:14.727 00:19:14.985 16:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.985 16:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.985 16:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.242 16:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.242 16:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.242 16:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.242 16:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.242 16:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.242 16:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.242 { 00:19:15.242 "cntlid": 15, 00:19:15.242 "qid": 0, 00:19:15.242 "state": "enabled", 00:19:15.242 "thread": "nvmf_tgt_poll_group_000", 00:19:15.242 "listen_address": { 00:19:15.242 "trtype": "TCP", 00:19:15.242 "adrfam": "IPv4", 00:19:15.242 "traddr": "10.0.0.2", 00:19:15.242 "trsvcid": "4420" 00:19:15.242 }, 00:19:15.242 "peer_address": { 00:19:15.242 "trtype": "TCP", 00:19:15.242 "adrfam": "IPv4", 00:19:15.242 "traddr": "10.0.0.1", 00:19:15.242 "trsvcid": "59908" 00:19:15.242 }, 00:19:15.242 "auth": { 00:19:15.242 "state": "completed", 00:19:15.242 "digest": "sha256", 00:19:15.242 "dhgroup": "ffdhe2048" 00:19:15.242 } 00:19:15.242 } 00:19:15.242 ]' 00:19:15.242 16:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.242 16:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:15.242 16:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.242 16:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:15.242 16:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.242 16:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.242 16:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.242 16:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.501 16:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzllNTdiMWNjNDE0NTc5ZDdjMWY3YjAzMjZjNTJiZmViNGJjOWE5ZjkzMWM1YjIwY2RiM2Y0MTI1ODM1M2ZmOEEXW9w=: 00:19:16.436 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.436 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:16.436 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.436 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.436 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.436 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:16.436 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.436 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:16.436 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:17.002 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:17.002 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.002 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:17.002 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:17.002 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:17.002 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.002 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.002 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.002 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.002 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.002 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.003 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.261 00:19:17.261 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.261 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.261 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.519 16:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.519 16:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.519 16:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.519 16:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.519 16:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.519 16:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.519 { 00:19:17.519 "cntlid": 17, 00:19:17.519 "qid": 0, 00:19:17.519 "state": "enabled", 00:19:17.519 "thread": "nvmf_tgt_poll_group_000", 00:19:17.519 "listen_address": { 00:19:17.519 "trtype": "TCP", 00:19:17.519 "adrfam": "IPv4", 00:19:17.519 "traddr": "10.0.0.2", 00:19:17.519 "trsvcid": "4420" 00:19:17.519 }, 00:19:17.519 "peer_address": { 00:19:17.519 "trtype": "TCP", 00:19:17.519 "adrfam": "IPv4", 00:19:17.519 "traddr": "10.0.0.1", 00:19:17.519 "trsvcid": "59928" 00:19:17.519 }, 00:19:17.519 "auth": { 00:19:17.519 "state": "completed", 00:19:17.519 "digest": "sha256", 00:19:17.519 "dhgroup": "ffdhe3072" 00:19:17.519 } 00:19:17.519 } 00:19:17.519 ]' 00:19:17.519 16:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.519 16:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:17.519 16:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.519 16:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:17.519 16:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.519 16:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:19:17.519 16:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.519 16:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.777 16:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NmIzYmQ3OTk5NWZlYWE3MGFjZjA3Mjc4M2JkYjRhMzlhY2MwMzU4NGY2NWVlODMycseK5g==: --dhchap-ctrl-secret DHHC-1:03:ZWYzZTdmN2MyOTI4ZDRiZGQ3OTFmMmJkZjFmZjFlMjJiMGQzNzUzMWUxYWY2NzFkZTJkNGU2YTUzODYyYjQ2N+bo6Mk=: 00:19:18.714 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.714 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:18.714 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.714 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.714 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.714 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:18.714 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:18.714 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:18.972 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:19:18.972 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.972 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:18.972 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:18.972 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:18.972 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.972 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.972 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.973 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.973 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.973 16:24:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.973 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.542 00:19:19.542 16:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:19.542 16:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:19.542 16:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.542 16:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.542 16:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.542 16:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.542 16:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.542 16:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.542 16:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:19.542 { 00:19:19.542 "cntlid": 19, 00:19:19.542 "qid": 0, 00:19:19.542 "state": "enabled", 00:19:19.542 "thread": "nvmf_tgt_poll_group_000", 00:19:19.542 "listen_address": { 00:19:19.542 "trtype": "TCP", 00:19:19.542 "adrfam": "IPv4", 00:19:19.542 "traddr": "10.0.0.2", 00:19:19.542 "trsvcid": "4420" 00:19:19.542 }, 00:19:19.542 "peer_address": { 00:19:19.542 "trtype": "TCP", 00:19:19.542 "adrfam": "IPv4", 00:19:19.542 "traddr": "10.0.0.1", 00:19:19.542 "trsvcid": "59948" 00:19:19.542 }, 00:19:19.542 "auth": { 00:19:19.542 "state": "completed", 00:19:19.542 "digest": "sha256", 00:19:19.542 "dhgroup": "ffdhe3072" 00:19:19.542 } 00:19:19.542 } 00:19:19.542 ]' 00:19:19.542 16:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:19.800 16:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:19.800 16:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:19.800 16:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:19.800 16:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:19.800 16:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.800 16:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.800 16:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.060 16:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDE1MjQ1YWZmMzViMmMxZmI2NjY5NDhmYTE5MGY3OTbCgy6n: --dhchap-ctrl-secret DHHC-1:02:NWNkMmM1MWY0MWE2ZDBkNjdkMGRkZDUzZWVkZDgwZTViMjcwZmMyMWE1NjRkNzExeIF+8w==: 00:19:21.030 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.030 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:21.030 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.030 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.030 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.030 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.030 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:21.030 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:21.288 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:19:21.288 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.288 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:21.288 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:21.288 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:21.288 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.288 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.288 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.288 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.288 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.288 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:19:21.288 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.547 00:19:21.547 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:21.547 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:21.547 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.805 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.805 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.805 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.805 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.805 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.805 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:21.805 { 00:19:21.805 "cntlid": 21, 00:19:21.805 "qid": 0, 00:19:21.805 "state": "enabled", 00:19:21.805 "thread": "nvmf_tgt_poll_group_000", 00:19:21.805 "listen_address": { 00:19:21.805 "trtype": "TCP", 00:19:21.805 "adrfam": "IPv4", 00:19:21.805 "traddr": "10.0.0.2", 00:19:21.805 "trsvcid": "4420" 00:19:21.805 }, 00:19:21.805 "peer_address": { 00:19:21.805 "trtype": "TCP", 00:19:21.805 "adrfam": "IPv4", 00:19:21.805 "traddr": "10.0.0.1", 00:19:21.805 "trsvcid": "45366" 00:19:21.805 }, 00:19:21.805 "auth": { 00:19:21.805 "state": "completed", 00:19:21.805 "digest": "sha256", 00:19:21.805 "dhgroup": "ffdhe3072" 00:19:21.805 } 00:19:21.805 } 00:19:21.805 ]' 00:19:21.805 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:21.805 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:21.805 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.063 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:22.063 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.063 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.063 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.063 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.321 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjA5NmFmNDliMjAyZWViMTMwNGFhY2I0ZGUyNDQ3MjlhZDAzN2VmYzJhYTFlNjgy5hHQCg==: --dhchap-ctrl-secret DHHC-1:01:MGQ1OTRjNDQ0YTk4Y2ZkMjEzMjMxZWE1YzRiMTQ4ZTA1WqAV: 00:19:23.256 16:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.256 16:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:23.256 16:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.256 16:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.257 16:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.257 16:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:23.257 16:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:23.257 16:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:23.514 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:19:23.514 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:23.514 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:23.514 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:23.514 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:23.514 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.514 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:23.514 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.514 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.514 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.514 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:23.514 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:23.772 00:19:23.772 16:24:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:23.772 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:23.772 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.029 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.029 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.029 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.029 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.029 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.029 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:24.029 { 00:19:24.029 "cntlid": 23, 00:19:24.029 "qid": 0, 00:19:24.029 "state": "enabled", 00:19:24.029 "thread": "nvmf_tgt_poll_group_000", 00:19:24.029 "listen_address": { 00:19:24.029 "trtype": "TCP", 00:19:24.029 "adrfam": "IPv4", 00:19:24.029 "traddr": "10.0.0.2", 00:19:24.029 "trsvcid": "4420" 00:19:24.029 }, 00:19:24.029 "peer_address": { 00:19:24.029 "trtype": "TCP", 00:19:24.030 "adrfam": "IPv4", 00:19:24.030 "traddr": "10.0.0.1", 00:19:24.030 "trsvcid": "45386" 00:19:24.030 }, 00:19:24.030 "auth": { 00:19:24.030 "state": "completed", 00:19:24.030 "digest": "sha256", 00:19:24.030 "dhgroup": "ffdhe3072" 00:19:24.030 } 00:19:24.030 } 00:19:24.030 ]' 00:19:24.030 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:24.030 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:24.030 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:24.030 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:24.030 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:24.030 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.030 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.030 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.288 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzllNTdiMWNjNDE0NTc5ZDdjMWY3YjAzMjZjNTJiZmViNGJjOWE5ZjkzMWM1YjIwY2RiM2Y0MTI1ODM1M2ZmOEEXW9w=: 00:19:25.224 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.224 16:24:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:25.224 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.224 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.224 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.224 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:25.224 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.224 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:25.224 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:25.482 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:19:25.482 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:25.482 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:25.482 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:25.482 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:25.482 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.482 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.482 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.482 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.482 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.482 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.482 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.049 00:19:26.049 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.049 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.049 16:24:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.308 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.308 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.308 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.308 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.308 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.308 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.308 { 00:19:26.308 "cntlid": 25, 00:19:26.308 "qid": 0, 00:19:26.308 "state": "enabled", 00:19:26.308 "thread": "nvmf_tgt_poll_group_000", 00:19:26.308 "listen_address": { 00:19:26.308 "trtype": "TCP", 00:19:26.308 "adrfam": "IPv4", 00:19:26.308 "traddr": "10.0.0.2", 00:19:26.308 "trsvcid": "4420" 00:19:26.308 }, 00:19:26.308 "peer_address": { 00:19:26.308 "trtype": "TCP", 00:19:26.308 "adrfam": "IPv4", 00:19:26.308 "traddr": "10.0.0.1", 00:19:26.308 "trsvcid": "45426" 00:19:26.308 }, 00:19:26.308 "auth": { 00:19:26.308 "state": "completed", 00:19:26.308 "digest": "sha256", 00:19:26.308 "dhgroup": "ffdhe4096" 00:19:26.308 } 00:19:26.308 } 00:19:26.308 ]' 00:19:26.308 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.308 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:26.308 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.308 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:26.308 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.308 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.308 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.308 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.567 16:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NmIzYmQ3OTk5NWZlYWE3MGFjZjA3Mjc4M2JkYjRhMzlhY2MwMzU4NGY2NWVlODMycseK5g==: --dhchap-ctrl-secret DHHC-1:03:ZWYzZTdmN2MyOTI4ZDRiZGQ3OTFmMmJkZjFmZjFlMjJiMGQzNzUzMWUxYWY2NzFkZTJkNGU2YTUzODYyYjQ2N+bo6Mk=: 00:19:27.503 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.503 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.503 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:27.503 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.503 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.503 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.503 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.503 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:27.503 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:27.761 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:19:27.761 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:27.761 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:27.761 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:27.761 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:27.761 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.761 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.761 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.761 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.761 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.761 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.761 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.329 00:19:28.329 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.329 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.329 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.587 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:19:28.587 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.587 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.587 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.587 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.587 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:28.587 { 00:19:28.587 "cntlid": 27, 00:19:28.587 "qid": 0, 00:19:28.587 "state": "enabled", 00:19:28.587 "thread": "nvmf_tgt_poll_group_000", 00:19:28.587 "listen_address": { 00:19:28.587 "trtype": "TCP", 00:19:28.587 "adrfam": "IPv4", 00:19:28.587 "traddr": "10.0.0.2", 00:19:28.587 "trsvcid": "4420" 00:19:28.587 }, 00:19:28.587 "peer_address": { 00:19:28.587 "trtype": "TCP", 00:19:28.587 "adrfam": "IPv4", 00:19:28.587 "traddr": "10.0.0.1", 00:19:28.587 "trsvcid": "45458" 00:19:28.587 }, 00:19:28.587 "auth": { 00:19:28.587 "state": "completed", 00:19:28.587 "digest": "sha256", 00:19:28.587 "dhgroup": "ffdhe4096" 00:19:28.587 } 00:19:28.587 } 00:19:28.587 ]' 00:19:28.587 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:28.587 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:28.587 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:28.587 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:28.587 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:28.587 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.587 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.587 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.845 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDE1MjQ1YWZmMzViMmMxZmI2NjY5NDhmYTE5MGY3OTbCgy6n: --dhchap-ctrl-secret DHHC-1:02:NWNkMmM1MWY0MWE2ZDBkNjdkMGRkZDUzZWVkZDgwZTViMjcwZmMyMWE1NjRkNzExeIF+8w==: 00:19:29.779 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.779 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:29.779 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.779 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.779 16:24:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.779 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:29.779 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:29.779 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:30.039 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:19:30.039 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.039 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:30.039 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:30.039 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:30.039 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.039 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.039 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.039 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.039 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.039 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.297 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.555 00:19:30.555 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:30.555 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:30.555 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.813 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.813 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.813 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.813 16:24:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.813 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.813 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:30.813 { 00:19:30.813 "cntlid": 29, 00:19:30.813 "qid": 0, 00:19:30.813 "state": "enabled", 00:19:30.813 "thread": "nvmf_tgt_poll_group_000", 00:19:30.813 "listen_address": { 00:19:30.813 "trtype": "TCP", 00:19:30.813 "adrfam": "IPv4", 00:19:30.813 "traddr": "10.0.0.2", 00:19:30.813 "trsvcid": "4420" 00:19:30.813 }, 00:19:30.813 "peer_address": { 00:19:30.813 "trtype": "TCP", 00:19:30.813 "adrfam": "IPv4", 00:19:30.813 "traddr": "10.0.0.1", 00:19:30.813 "trsvcid": "49032" 00:19:30.813 }, 00:19:30.813 "auth": { 00:19:30.813 "state": "completed", 00:19:30.813 "digest": "sha256", 00:19:30.813 "dhgroup": "ffdhe4096" 00:19:30.813 } 00:19:30.813 } 00:19:30.813 ]' 00:19:30.813 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:30.813 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:30.813 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.071 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:31.071 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.071 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.071 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.071 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.329 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjA5NmFmNDliMjAyZWViMTMwNGFhY2I0ZGUyNDQ3MjlhZDAzN2VmYzJhYTFlNjgy5hHQCg==: --dhchap-ctrl-secret DHHC-1:01:MGQ1OTRjNDQ0YTk4Y2ZkMjEzMjMxZWE1YzRiMTQ4ZTA1WqAV: 00:19:32.265 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.265 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:32.265 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.265 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.265 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.265 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:32.265 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe4096 00:19:32.265 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:32.523 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:19:32.523 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:32.523 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:32.523 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:32.523 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:32.524 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.524 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:32.524 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.524 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.524 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.524 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:32.524 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:33.089 00:19:33.090 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.090 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.090 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.348 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.348 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.348 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.348 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.348 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.348 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.348 { 00:19:33.348 "cntlid": 31, 00:19:33.348 "qid": 0, 00:19:33.348 "state": "enabled", 00:19:33.348 "thread": 
"nvmf_tgt_poll_group_000", 00:19:33.348 "listen_address": { 00:19:33.348 "trtype": "TCP", 00:19:33.348 "adrfam": "IPv4", 00:19:33.348 "traddr": "10.0.0.2", 00:19:33.348 "trsvcid": "4420" 00:19:33.348 }, 00:19:33.348 "peer_address": { 00:19:33.348 "trtype": "TCP", 00:19:33.348 "adrfam": "IPv4", 00:19:33.348 "traddr": "10.0.0.1", 00:19:33.348 "trsvcid": "49072" 00:19:33.348 }, 00:19:33.348 "auth": { 00:19:33.348 "state": "completed", 00:19:33.348 "digest": "sha256", 00:19:33.348 "dhgroup": "ffdhe4096" 00:19:33.348 } 00:19:33.348 } 00:19:33.348 ]' 00:19:33.348 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.348 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:33.348 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.348 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:33.348 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.348 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.348 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.348 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.613 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzllNTdiMWNjNDE0NTc5ZDdjMWY3YjAzMjZjNTJiZmViNGJjOWE5ZjkzMWM1YjIwY2RiM2Y0MTI1ODM1M2ZmOEEXW9w=: 00:19:34.556 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.556 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.556 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.556 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.556 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.556 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:34.556 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:34.556 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:34.556 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:34.815 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:19:34.815 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:34.815 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:34.815 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:34.815 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:34.815 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.815 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.815 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.815 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.815 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.815 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.815 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.381 00:19:35.381 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.381 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.381 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.640 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.640 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.640 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.640 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.640 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.640 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:35.640 { 00:19:35.640 "cntlid": 33, 00:19:35.640 "qid": 0, 00:19:35.640 "state": "enabled", 00:19:35.640 "thread": "nvmf_tgt_poll_group_000", 00:19:35.640 "listen_address": { 00:19:35.640 "trtype": "TCP", 00:19:35.640 "adrfam": "IPv4", 00:19:35.640 "traddr": "10.0.0.2", 00:19:35.640 "trsvcid": "4420" 00:19:35.640 }, 00:19:35.640 "peer_address": { 00:19:35.640 "trtype": "TCP", 00:19:35.640 "adrfam": 
"IPv4", 00:19:35.640 "traddr": "10.0.0.1", 00:19:35.640 "trsvcid": "49108" 00:19:35.640 }, 00:19:35.640 "auth": { 00:19:35.640 "state": "completed", 00:19:35.640 "digest": "sha256", 00:19:35.640 "dhgroup": "ffdhe6144" 00:19:35.640 } 00:19:35.640 } 00:19:35.640 ]' 00:19:35.640 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:35.640 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:35.640 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:35.932 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:35.932 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:35.932 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.932 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.932 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.190 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NmIzYmQ3OTk5NWZlYWE3MGFjZjA3Mjc4M2JkYjRhMzlhY2MwMzU4NGY2NWVlODMycseK5g==: --dhchap-ctrl-secret DHHC-1:03:ZWYzZTdmN2MyOTI4ZDRiZGQ3OTFmMmJkZjFmZjFlMjJiMGQzNzUzMWUxYWY2NzFkZTJkNGU2YTUzODYyYjQ2N+bo6Mk=: 00:19:37.125 16:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.125 16:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:37.125 16:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.125 16:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.125 16:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.125 16:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.125 16:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:37.125 16:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:37.383 16:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:19:37.383 16:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.383 16:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:37.383 
16:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:37.383 16:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:37.383 16:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.383 16:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.383 16:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.383 16:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.383 16:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.383 16:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.383 16:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.950 00:19:37.950 16:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:37.950 16:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:37.950 16:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.207 16:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.207 16:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.207 16:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.207 16:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.207 16:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.207 16:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.207 { 00:19:38.207 "cntlid": 35, 00:19:38.207 "qid": 0, 00:19:38.207 "state": "enabled", 00:19:38.207 "thread": "nvmf_tgt_poll_group_000", 00:19:38.207 "listen_address": { 00:19:38.207 "trtype": "TCP", 00:19:38.207 "adrfam": "IPv4", 00:19:38.207 "traddr": "10.0.0.2", 00:19:38.207 "trsvcid": "4420" 00:19:38.207 }, 00:19:38.207 "peer_address": { 00:19:38.207 "trtype": "TCP", 00:19:38.207 "adrfam": "IPv4", 00:19:38.207 "traddr": "10.0.0.1", 00:19:38.207 "trsvcid": "49148" 00:19:38.207 }, 00:19:38.207 "auth": { 00:19:38.207 "state": "completed", 00:19:38.207 "digest": "sha256", 00:19:38.207 "dhgroup": "ffdhe6144" 00:19:38.207 } 00:19:38.207 } 00:19:38.207 ]' 00:19:38.207 16:24:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.207 16:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:38.207 16:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.207 16:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:38.207 16:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.208 16:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.208 16:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.208 16:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.465 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDE1MjQ1YWZmMzViMmMxZmI2NjY5NDhmYTE5MGY3OTbCgy6n: --dhchap-ctrl-secret DHHC-1:02:NWNkMmM1MWY0MWE2ZDBkNjdkMGRkZDUzZWVkZDgwZTViMjcwZmMyMWE1NjRkNzExeIF+8w==: 00:19:39.400 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.400 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:39.400 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.400 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.400 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.400 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.400 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:39.400 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:39.657 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:19:39.657 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.657 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:39.657 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:39.657 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:39.657 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:19:39.657 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.657 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.657 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.657 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.657 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.657 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.224 00:19:40.224 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:40.224 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.224 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.482 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.482 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.482 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.482 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.482 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.482 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.482 { 00:19:40.482 "cntlid": 37, 00:19:40.482 "qid": 0, 00:19:40.482 "state": "enabled", 00:19:40.482 "thread": "nvmf_tgt_poll_group_000", 00:19:40.482 "listen_address": { 00:19:40.482 "trtype": "TCP", 00:19:40.482 "adrfam": "IPv4", 00:19:40.482 "traddr": "10.0.0.2", 00:19:40.482 "trsvcid": "4420" 00:19:40.482 }, 00:19:40.482 "peer_address": { 00:19:40.482 "trtype": "TCP", 00:19:40.482 "adrfam": "IPv4", 00:19:40.482 "traddr": "10.0.0.1", 00:19:40.482 "trsvcid": "35672" 00:19:40.482 }, 00:19:40.482 "auth": { 00:19:40.482 "state": "completed", 00:19:40.482 "digest": "sha256", 00:19:40.482 "dhgroup": "ffdhe6144" 00:19:40.482 } 00:19:40.482 } 00:19:40.482 ]' 00:19:40.482 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.739 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.740 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.740 16:25:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:40.740 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.740 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.740 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.740 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.997 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjA5NmFmNDliMjAyZWViMTMwNGFhY2I0ZGUyNDQ3MjlhZDAzN2VmYzJhYTFlNjgy5hHQCg==: --dhchap-ctrl-secret DHHC-1:01:MGQ1OTRjNDQ0YTk4Y2ZkMjEzMjMxZWE1YzRiMTQ4ZTA1WqAV: 00:19:41.935 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.935 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:41.935 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.935 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.935 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.935 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:41.935 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:41.935 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:42.192 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:19:42.192 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.192 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:42.192 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:42.192 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:42.192 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.192 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:42.192 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:42.192 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.192 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.192 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:42.192 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:42.758 00:19:42.758 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.758 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:42.758 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.016 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.016 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.016 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.016 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.016 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.016 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.016 { 00:19:43.016 "cntlid": 39, 00:19:43.016 "qid": 0, 00:19:43.016 "state": "enabled", 00:19:43.016 "thread": "nvmf_tgt_poll_group_000", 00:19:43.016 "listen_address": { 00:19:43.016 "trtype": "TCP", 00:19:43.016 "adrfam": "IPv4", 00:19:43.016 "traddr": "10.0.0.2", 00:19:43.016 "trsvcid": "4420" 00:19:43.016 }, 00:19:43.016 "peer_address": { 00:19:43.016 "trtype": "TCP", 00:19:43.016 "adrfam": "IPv4", 00:19:43.016 "traddr": "10.0.0.1", 00:19:43.016 "trsvcid": "35704" 00:19:43.016 }, 00:19:43.016 "auth": { 00:19:43.016 "state": "completed", 00:19:43.016 "digest": "sha256", 00:19:43.016 "dhgroup": "ffdhe6144" 00:19:43.016 } 00:19:43.016 } 00:19:43.016 ]' 00:19:43.016 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.016 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:43.016 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.017 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:43.017 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.276 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.276 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.276 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.536 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzllNTdiMWNjNDE0NTc5ZDdjMWY3YjAzMjZjNTJiZmViNGJjOWE5ZjkzMWM1YjIwY2RiM2Y0MTI1ODM1M2ZmOEEXW9w=: 00:19:44.495 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.495 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:44.495 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.495 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.495 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.495 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:44.495 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:44.495 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:44.495 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:44.495 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:19:44.495 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.495 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:44.495 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:44.495 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:44.495 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.495 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.495 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.495 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.753 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.753 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.753 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.690 00:19:45.690 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.690 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.690 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.690 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.690 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.690 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.690 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.947 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.947 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.947 { 00:19:45.947 "cntlid": 41, 00:19:45.947 "qid": 0, 00:19:45.947 "state": "enabled", 00:19:45.947 "thread": "nvmf_tgt_poll_group_000", 00:19:45.947 "listen_address": { 00:19:45.947 "trtype": "TCP", 00:19:45.947 "adrfam": "IPv4", 00:19:45.947 "traddr": "10.0.0.2", 00:19:45.947 "trsvcid": "4420" 00:19:45.947 }, 00:19:45.947 "peer_address": { 00:19:45.947 "trtype": "TCP", 00:19:45.947 "adrfam": "IPv4", 00:19:45.947 "traddr": "10.0.0.1", 00:19:45.947 "trsvcid": "35738" 00:19:45.947 }, 00:19:45.947 "auth": { 00:19:45.947 "state": "completed", 00:19:45.947 "digest": "sha256", 00:19:45.947 "dhgroup": "ffdhe8192" 00:19:45.947 } 00:19:45.947 } 00:19:45.947 ]' 00:19:45.947 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.947 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.947 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.947 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:45.947 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.947 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.947 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.947 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.205 
16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NmIzYmQ3OTk5NWZlYWE3MGFjZjA3Mjc4M2JkYjRhMzlhY2MwMzU4NGY2NWVlODMycseK5g==: --dhchap-ctrl-secret DHHC-1:03:ZWYzZTdmN2MyOTI4ZDRiZGQ3OTFmMmJkZjFmZjFlMjJiMGQzNzUzMWUxYWY2NzFkZTJkNGU2YTUzODYyYjQ2N+bo6Mk=: 00:19:47.142 16:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.142 16:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:47.142 16:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.142 16:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.142 16:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.142 16:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.142 16:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:47.142 16:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:47.400 16:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:19:47.400 16:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:47.400 16:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:47.400 16:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:47.400 16:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:47.400 16:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.401 16:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.401 16:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.401 16:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.401 16:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.401 16:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.401 16:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.338 00:19:48.338 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:48.338 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:48.338 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.596 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.596 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.596 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.596 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.596 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.596 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:48.596 { 00:19:48.596 "cntlid": 43, 00:19:48.596 "qid": 0, 00:19:48.596 "state": "enabled", 00:19:48.596 "thread": "nvmf_tgt_poll_group_000", 00:19:48.596 "listen_address": { 00:19:48.596 "trtype": "TCP", 00:19:48.596 "adrfam": "IPv4", 00:19:48.596 "traddr": "10.0.0.2", 00:19:48.596 "trsvcid": "4420" 00:19:48.596 }, 00:19:48.596 "peer_address": { 00:19:48.596 "trtype": "TCP", 00:19:48.596 "adrfam": "IPv4", 00:19:48.596 "traddr": "10.0.0.1", 00:19:48.596 "trsvcid": "35762" 00:19:48.596 }, 00:19:48.596 "auth": { 00:19:48.596 "state": "completed", 00:19:48.596 "digest": "sha256", 00:19:48.596 "dhgroup": "ffdhe8192" 00:19:48.596 } 00:19:48.596 } 00:19:48.596 ]' 00:19:48.596 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:48.596 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:48.596 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:48.854 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:48.854 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:48.854 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.854 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.854 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.112 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:01:NDE1MjQ1YWZmMzViMmMxZmI2NjY5NDhmYTE5MGY3OTbCgy6n: --dhchap-ctrl-secret DHHC-1:02:NWNkMmM1MWY0MWE2ZDBkNjdkMGRkZDUzZWVkZDgwZTViMjcwZmMyMWE1NjRkNzExeIF+8w==: 00:19:50.049 16:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.049 16:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.049 16:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.049 16:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.049 16:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.049 16:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:50.049 16:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:50.049 16:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:50.307 16:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:19:50.307 16:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.307 16:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:50.307 16:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:50.307 16:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:50.307 16:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.307 16:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.307 16:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.307 16:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.307 16:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.307 16:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.307 16:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.265 
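[editor's reading aid] The same pattern repeats in this log for every (digest, dhgroup, keyid) combination. Below is a minimal shell sketch of one iteration, assembled only from the commands visible above; the hostnqn/uuid variables, the key names key$i/ckey$i and the $secret placeholder are assumptions standing in for values produced earlier in auth.sh (that part of the script is outside this excerpt), and rpc_cmd from the log is written out here as a plain target-side rpc.py call.

    # One connect_authenticate iteration, as exercised by target/auth.sh (sketch, not the script itself)
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    uuid=5b23e107-7094-e311-b1cb-001e67a97d55            # host UUID seen in the log
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:$uuid
    subnqn=nqn.2024-03.io.spdk:cnode0
    i=0                                                  # key index; the log cycles 0..3
    secret="DHHC-1:..."                                  # full DHHC-1 strings appear verbatim in the log

    # 1. Restrict the host-side bdev_nvme module to one digest/dhgroup combination.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

    # 2. Allow the host on the subsystem with the matching DH-CHAP key pair (keys registered earlier).
    $rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key$i --dhchap-ctrlr-key ckey$i

    # 3. Attach a controller through the host RPC socket; this performs the in-band authentication.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q $hostnqn -n $subnqn --dhchap-key key$i --dhchap-ctrlr-key ckey$i

    # 4. Verify the controller exists and the qpair reports the expected auth state, digest and dhgroup.
    $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
    $rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth.state'

    # 5. Detach, then repeat the handshake with the kernel initiator via nvme-cli and clean up.
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n $subnqn -i 1 -q $hostnqn --hostid $uuid --dhchap-secret "$secret"
    nvme disconnect -n $subnqn
    $rpc nvmf_subsystem_remove_host $subnqn $hostnqn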
00:19:51.265 16:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:51.265 16:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:51.265 16:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.537 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.537 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.537 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.537 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.537 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.537 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:51.537 { 00:19:51.537 "cntlid": 45, 00:19:51.537 "qid": 0, 00:19:51.537 "state": "enabled", 00:19:51.537 "thread": "nvmf_tgt_poll_group_000", 00:19:51.537 "listen_address": { 00:19:51.537 "trtype": "TCP", 00:19:51.537 "adrfam": "IPv4", 00:19:51.537 "traddr": "10.0.0.2", 00:19:51.537 "trsvcid": "4420" 00:19:51.537 }, 00:19:51.537 "peer_address": { 00:19:51.537 "trtype": "TCP", 00:19:51.537 "adrfam": "IPv4", 00:19:51.537 "traddr": "10.0.0.1", 00:19:51.537 "trsvcid": "53362" 00:19:51.537 }, 00:19:51.537 "auth": { 00:19:51.537 "state": "completed", 00:19:51.537 "digest": "sha256", 00:19:51.537 "dhgroup": "ffdhe8192" 00:19:51.537 } 00:19:51.537 } 00:19:51.537 ]' 00:19:51.537 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:51.537 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:51.537 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:51.537 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:51.537 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:51.537 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.537 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.537 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.795 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjA5NmFmNDliMjAyZWViMTMwNGFhY2I0ZGUyNDQ3MjlhZDAzN2VmYzJhYTFlNjgy5hHQCg==: --dhchap-ctrl-secret DHHC-1:01:MGQ1OTRjNDQ0YTk4Y2ZkMjEzMjMxZWE1YzRiMTQ4ZTA1WqAV: 00:19:52.730 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.730 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:19:52.730 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:52.730 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.730 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.730 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.730 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:52.730 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:52.730 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:52.988 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:19:52.988 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:52.988 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:52.988 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:52.988 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:52.988 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.988 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:52.988 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.988 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.988 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.988 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:52.988 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:53.925 00:19:53.925 16:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:53.925 16:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:53.925 16:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:54.183 16:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.183 16:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.183 16:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.183 16:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.183 16:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.183 16:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:54.183 { 00:19:54.183 "cntlid": 47, 00:19:54.183 "qid": 0, 00:19:54.183 "state": "enabled", 00:19:54.183 "thread": "nvmf_tgt_poll_group_000", 00:19:54.183 "listen_address": { 00:19:54.183 "trtype": "TCP", 00:19:54.183 "adrfam": "IPv4", 00:19:54.183 "traddr": "10.0.0.2", 00:19:54.183 "trsvcid": "4420" 00:19:54.183 }, 00:19:54.183 "peer_address": { 00:19:54.183 "trtype": "TCP", 00:19:54.183 "adrfam": "IPv4", 00:19:54.183 "traddr": "10.0.0.1", 00:19:54.183 "trsvcid": "53390" 00:19:54.183 }, 00:19:54.183 "auth": { 00:19:54.183 "state": "completed", 00:19:54.183 "digest": "sha256", 00:19:54.183 "dhgroup": "ffdhe8192" 00:19:54.183 } 00:19:54.183 } 00:19:54.183 ]' 00:19:54.183 16:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:54.183 16:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:54.183 16:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:54.183 16:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:54.183 16:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:54.441 16:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.441 16:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.441 16:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.441 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzllNTdiMWNjNDE0NTc5ZDdjMWY3YjAzMjZjNTJiZmViNGJjOWE5ZjkzMWM1YjIwY2RiM2Y0MTI1ODM1M2ZmOEEXW9w=: 00:19:55.819 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.819 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:55.819 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.819 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:55.819 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.819 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:55.819 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:55.819 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:55.819 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:55.819 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:55.819 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:19:55.819 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:55.819 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:55.819 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:55.819 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:55.819 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.819 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.819 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.819 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.819 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.819 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.819 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.077 00:19:56.077 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:56.077 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:56.077 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.335 16:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.335 16:25:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.335 16:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.335 16:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.335 16:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.335 16:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:56.335 { 00:19:56.335 "cntlid": 49, 00:19:56.335 "qid": 0, 00:19:56.335 "state": "enabled", 00:19:56.335 "thread": "nvmf_tgt_poll_group_000", 00:19:56.335 "listen_address": { 00:19:56.335 "trtype": "TCP", 00:19:56.335 "adrfam": "IPv4", 00:19:56.335 "traddr": "10.0.0.2", 00:19:56.335 "trsvcid": "4420" 00:19:56.335 }, 00:19:56.335 "peer_address": { 00:19:56.335 "trtype": "TCP", 00:19:56.335 "adrfam": "IPv4", 00:19:56.335 "traddr": "10.0.0.1", 00:19:56.335 "trsvcid": "53424" 00:19:56.335 }, 00:19:56.335 "auth": { 00:19:56.335 "state": "completed", 00:19:56.335 "digest": "sha384", 00:19:56.335 "dhgroup": "null" 00:19:56.335 } 00:19:56.335 } 00:19:56.335 ]' 00:19:56.336 16:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:56.336 16:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:56.336 16:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:56.594 16:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:56.594 16:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:56.594 16:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.594 16:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.594 16:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.852 16:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NmIzYmQ3OTk5NWZlYWE3MGFjZjA3Mjc4M2JkYjRhMzlhY2MwMzU4NGY2NWVlODMycseK5g==: --dhchap-ctrl-secret DHHC-1:03:ZWYzZTdmN2MyOTI4ZDRiZGQ3OTFmMmJkZjFmZjFlMjJiMGQzNzUzMWUxYWY2NzFkZTJkNGU2YTUzODYyYjQ2N+bo6Mk=: 00:19:57.788 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.788 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:57.788 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.788 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.788 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.788 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:57.788 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:57.788 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:58.045 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:19:58.045 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:58.045 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:58.045 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:58.045 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:58.045 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.045 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.046 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.046 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.046 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.046 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.046 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.303 00:19:58.303 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:58.303 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.303 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.561 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.561 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.561 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.561 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:58.561 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.561 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:58.561 { 00:19:58.561 "cntlid": 51, 00:19:58.561 "qid": 0, 00:19:58.561 "state": "enabled", 00:19:58.561 "thread": "nvmf_tgt_poll_group_000", 00:19:58.561 "listen_address": { 00:19:58.561 "trtype": "TCP", 00:19:58.561 "adrfam": "IPv4", 00:19:58.561 "traddr": "10.0.0.2", 00:19:58.561 "trsvcid": "4420" 00:19:58.561 }, 00:19:58.561 "peer_address": { 00:19:58.561 "trtype": "TCP", 00:19:58.561 "adrfam": "IPv4", 00:19:58.561 "traddr": "10.0.0.1", 00:19:58.561 "trsvcid": "53452" 00:19:58.561 }, 00:19:58.561 "auth": { 00:19:58.561 "state": "completed", 00:19:58.561 "digest": "sha384", 00:19:58.561 "dhgroup": "null" 00:19:58.561 } 00:19:58.561 } 00:19:58.561 ]' 00:19:58.561 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:58.561 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:58.561 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:58.819 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:58.819 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:58.819 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.819 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.819 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.077 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDE1MjQ1YWZmMzViMmMxZmI2NjY5NDhmYTE5MGY3OTbCgy6n: --dhchap-ctrl-secret DHHC-1:02:NWNkMmM1MWY0MWE2ZDBkNjdkMGRkZDUzZWVkZDgwZTViMjcwZmMyMWE1NjRkNzExeIF+8w==: 00:20:00.011 16:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.011 16:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:00.011 16:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.011 16:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.011 16:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.011 16:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:00.011 16:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:00.011 16:25:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:00.269 16:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:20:00.269 16:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:00.269 16:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:00.269 16:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:00.269 16:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:00.269 16:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.269 16:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.269 16:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.269 16:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.269 16:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.269 16:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.269 16:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.527 00:20:00.527 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:00.527 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:00.527 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.784 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.784 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.784 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.784 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.784 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.784 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:00.784 { 00:20:00.784 "cntlid": 53, 00:20:00.784 "qid": 0, 00:20:00.784 "state": "enabled", 00:20:00.784 "thread": 
"nvmf_tgt_poll_group_000", 00:20:00.784 "listen_address": { 00:20:00.784 "trtype": "TCP", 00:20:00.784 "adrfam": "IPv4", 00:20:00.784 "traddr": "10.0.0.2", 00:20:00.784 "trsvcid": "4420" 00:20:00.784 }, 00:20:00.784 "peer_address": { 00:20:00.784 "trtype": "TCP", 00:20:00.784 "adrfam": "IPv4", 00:20:00.784 "traddr": "10.0.0.1", 00:20:00.784 "trsvcid": "57530" 00:20:00.784 }, 00:20:00.784 "auth": { 00:20:00.784 "state": "completed", 00:20:00.784 "digest": "sha384", 00:20:00.784 "dhgroup": "null" 00:20:00.784 } 00:20:00.784 } 00:20:00.784 ]' 00:20:00.785 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:01.042 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:01.042 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:01.042 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:01.042 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:01.042 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.042 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.042 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.300 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjA5NmFmNDliMjAyZWViMTMwNGFhY2I0ZGUyNDQ3MjlhZDAzN2VmYzJhYTFlNjgy5hHQCg==: --dhchap-ctrl-secret DHHC-1:01:MGQ1OTRjNDQ0YTk4Y2ZkMjEzMjMxZWE1YzRiMTQ4ZTA1WqAV: 00:20:02.237 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.237 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:02.237 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.237 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.237 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.237 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:02.238 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:02.238 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:02.495 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:20:02.495 16:25:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:02.495 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:02.495 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:02.495 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:02.495 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.495 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:02.495 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.496 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.496 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.496 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:02.496 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:02.753 00:20:02.753 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:02.753 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:02.753 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.012 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.012 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.012 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.012 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.012 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.012 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:03.012 { 00:20:03.012 "cntlid": 55, 00:20:03.012 "qid": 0, 00:20:03.012 "state": "enabled", 00:20:03.012 "thread": "nvmf_tgt_poll_group_000", 00:20:03.012 "listen_address": { 00:20:03.012 "trtype": "TCP", 00:20:03.012 "adrfam": "IPv4", 00:20:03.012 "traddr": "10.0.0.2", 00:20:03.012 "trsvcid": "4420" 00:20:03.012 }, 00:20:03.012 "peer_address": { 00:20:03.012 "trtype": "TCP", 00:20:03.012 "adrfam": "IPv4", 00:20:03.012 "traddr": "10.0.0.1", 00:20:03.012 "trsvcid": "57566" 00:20:03.012 }, 00:20:03.012 "auth": { 00:20:03.012 "state": "completed", 00:20:03.012 
"digest": "sha384", 00:20:03.012 "dhgroup": "null" 00:20:03.012 } 00:20:03.012 } 00:20:03.012 ]' 00:20:03.012 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:03.012 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:03.012 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:03.271 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:03.271 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:03.271 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.271 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.271 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.530 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzllNTdiMWNjNDE0NTc5ZDdjMWY3YjAzMjZjNTJiZmViNGJjOWE5ZjkzMWM1YjIwY2RiM2Y0MTI1ODM1M2ZmOEEXW9w=: 00:20:04.465 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.465 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:04.465 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.465 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.465 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.465 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:04.465 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:04.465 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:04.465 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:04.724 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:20:04.724 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:04.724 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:04.724 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:04.724 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:20:04.724 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.724 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.724 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.724 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.724 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.724 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.724 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.982 00:20:04.982 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:04.982 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.982 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:05.240 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.240 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.240 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.240 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.240 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.240 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:05.240 { 00:20:05.240 "cntlid": 57, 00:20:05.240 "qid": 0, 00:20:05.240 "state": "enabled", 00:20:05.240 "thread": "nvmf_tgt_poll_group_000", 00:20:05.240 "listen_address": { 00:20:05.240 "trtype": "TCP", 00:20:05.240 "adrfam": "IPv4", 00:20:05.240 "traddr": "10.0.0.2", 00:20:05.240 "trsvcid": "4420" 00:20:05.240 }, 00:20:05.240 "peer_address": { 00:20:05.240 "trtype": "TCP", 00:20:05.240 "adrfam": "IPv4", 00:20:05.240 "traddr": "10.0.0.1", 00:20:05.240 "trsvcid": "57588" 00:20:05.240 }, 00:20:05.240 "auth": { 00:20:05.240 "state": "completed", 00:20:05.240 "digest": "sha384", 00:20:05.240 "dhgroup": "ffdhe2048" 00:20:05.240 } 00:20:05.240 } 00:20:05.240 ]' 00:20:05.240 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:05.240 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha384 == \s\h\a\3\8\4 ]] 00:20:05.240 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:05.240 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:05.240 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:05.499 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.499 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.499 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.758 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NmIzYmQ3OTk5NWZlYWE3MGFjZjA3Mjc4M2JkYjRhMzlhY2MwMzU4NGY2NWVlODMycseK5g==: --dhchap-ctrl-secret DHHC-1:03:ZWYzZTdmN2MyOTI4ZDRiZGQ3OTFmMmJkZjFmZjFlMjJiMGQzNzUzMWUxYWY2NzFkZTJkNGU2YTUzODYyYjQ2N+bo6Mk=: 00:20:06.724 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.724 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:06.724 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.724 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.724 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.724 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:06.724 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:06.724 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:06.982 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:20:06.982 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:06.982 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:06.982 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:06.982 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:06.982 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.982 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.982 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.982 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.982 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.982 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.982 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.240 00:20:07.240 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:07.240 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:07.240 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.498 16:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.498 16:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.498 16:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.498 16:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.498 16:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.498 16:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:07.498 { 00:20:07.498 "cntlid": 59, 00:20:07.498 "qid": 0, 00:20:07.498 "state": "enabled", 00:20:07.498 "thread": "nvmf_tgt_poll_group_000", 00:20:07.498 "listen_address": { 00:20:07.498 "trtype": "TCP", 00:20:07.498 "adrfam": "IPv4", 00:20:07.498 "traddr": "10.0.0.2", 00:20:07.498 "trsvcid": "4420" 00:20:07.498 }, 00:20:07.498 "peer_address": { 00:20:07.498 "trtype": "TCP", 00:20:07.498 "adrfam": "IPv4", 00:20:07.498 "traddr": "10.0.0.1", 00:20:07.498 "trsvcid": "57604" 00:20:07.498 }, 00:20:07.498 "auth": { 00:20:07.498 "state": "completed", 00:20:07.498 "digest": "sha384", 00:20:07.498 "dhgroup": "ffdhe2048" 00:20:07.498 } 00:20:07.498 } 00:20:07.498 ]' 00:20:07.498 16:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:07.498 16:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:07.498 16:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:07.755 16:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:07.755 16:25:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:07.755 16:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.755 16:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.755 16:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.013 16:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDE1MjQ1YWZmMzViMmMxZmI2NjY5NDhmYTE5MGY3OTbCgy6n: --dhchap-ctrl-secret DHHC-1:02:NWNkMmM1MWY0MWE2ZDBkNjdkMGRkZDUzZWVkZDgwZTViMjcwZmMyMWE1NjRkNzExeIF+8w==: 00:20:08.951 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.951 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:08.951 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.951 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.951 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.951 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:08.951 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:08.951 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:09.209 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:20:09.209 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:09.209 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:09.209 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:09.209 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:09.209 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.209 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.209 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.209 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:20:09.209 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.209 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.209 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.467 00:20:09.467 16:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:09.467 16:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:09.467 16:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.724 16:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.724 16:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.724 16:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.724 16:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.724 16:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.724 16:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:09.724 { 00:20:09.724 "cntlid": 61, 00:20:09.724 "qid": 0, 00:20:09.724 "state": "enabled", 00:20:09.724 "thread": "nvmf_tgt_poll_group_000", 00:20:09.724 "listen_address": { 00:20:09.724 "trtype": "TCP", 00:20:09.724 "adrfam": "IPv4", 00:20:09.724 "traddr": "10.0.0.2", 00:20:09.724 "trsvcid": "4420" 00:20:09.724 }, 00:20:09.724 "peer_address": { 00:20:09.724 "trtype": "TCP", 00:20:09.724 "adrfam": "IPv4", 00:20:09.724 "traddr": "10.0.0.1", 00:20:09.724 "trsvcid": "57636" 00:20:09.724 }, 00:20:09.725 "auth": { 00:20:09.725 "state": "completed", 00:20:09.725 "digest": "sha384", 00:20:09.725 "dhgroup": "ffdhe2048" 00:20:09.725 } 00:20:09.725 } 00:20:09.725 ]' 00:20:09.725 16:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:09.982 16:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:09.982 16:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:09.982 16:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:09.982 16:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:09.982 16:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.982 16:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
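The entries above and below repeat one authentication round per digest/dhgroup/key combination. A condensed sketch of that round, reconstructed only from commands already visible in this trace, follows; the rpc.py path, addresses, and NQNs are copied from the log, key1/ckey1 stand for the DH-HMAC-CHAP keys the test script registered earlier in the run (not shown in this excerpt), and the target-side calls are assumed to use the default RPC socket behind the script's rpc_cmd wrapper.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# host side: restrict the initiator to the digest/dhgroup pair under test
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

# target side: allow the host NQN with the key pair under test
$rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key1 --dhchap-ctrlr-key ckey1

# host side: attach, then confirm the qpair negotiated the expected parameters
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q $hostnqn -n $subnqn --dhchap-key key1 --dhchap-ctrlr-key ckey1
$rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth.digest'   # expect sha384
$rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth.dhgroup'  # expect ffdhe2048
$rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth.state'    # expect completed

# host side: detach; the script then repeats the handshake with the kernel
# initiator (nvme connect ... --dhchap-secret / --dhchap-ctrl-secret in DHHC-1
# form, then nvme disconnect) before removing the host from the subsystem
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
$rpc nvmf_subsystem_remove_host $subnqn $hostnqn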
00:20:09.982 16:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.241 16:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjA5NmFmNDliMjAyZWViMTMwNGFhY2I0ZGUyNDQ3MjlhZDAzN2VmYzJhYTFlNjgy5hHQCg==: --dhchap-ctrl-secret DHHC-1:01:MGQ1OTRjNDQ0YTk4Y2ZkMjEzMjMxZWE1YzRiMTQ4ZTA1WqAV: 00:20:11.177 16:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.177 16:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:11.177 16:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.177 16:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.177 16:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.177 16:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:11.177 16:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:11.177 16:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:11.435 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:20:11.435 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:11.435 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:11.435 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:11.435 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:11.435 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.435 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:11.435 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.435 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.435 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.435 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:11.436 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:11.694 00:20:11.694 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:11.694 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:11.694 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.952 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.952 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.952 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.952 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.952 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.952 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:11.952 { 00:20:11.952 "cntlid": 63, 00:20:11.952 "qid": 0, 00:20:11.952 "state": "enabled", 00:20:11.952 "thread": "nvmf_tgt_poll_group_000", 00:20:11.952 "listen_address": { 00:20:11.952 "trtype": "TCP", 00:20:11.952 "adrfam": "IPv4", 00:20:11.952 "traddr": "10.0.0.2", 00:20:11.952 "trsvcid": "4420" 00:20:11.952 }, 00:20:11.952 "peer_address": { 00:20:11.952 "trtype": "TCP", 00:20:11.952 "adrfam": "IPv4", 00:20:11.952 "traddr": "10.0.0.1", 00:20:11.952 "trsvcid": "34620" 00:20:11.952 }, 00:20:11.953 "auth": { 00:20:11.953 "state": "completed", 00:20:11.953 "digest": "sha384", 00:20:11.953 "dhgroup": "ffdhe2048" 00:20:11.953 } 00:20:11.953 } 00:20:11.953 ]' 00:20:11.953 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:12.211 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:12.211 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:12.211 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:12.211 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:12.211 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.211 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.211 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.469 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzllNTdiMWNjNDE0NTc5ZDdjMWY3YjAzMjZjNTJiZmViNGJjOWE5ZjkzMWM1YjIwY2RiM2Y0MTI1ODM1M2ZmOEEXW9w=: 00:20:13.404 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.404 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:13.404 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.404 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.404 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.404 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:13.404 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:13.404 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:13.404 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:13.662 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:20:13.662 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:13.662 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:13.662 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:13.662 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:13.662 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.662 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.662 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.662 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.662 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.662 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.662 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.920 00:20:13.920 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:13.920 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:13.920 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.178 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.178 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.178 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.178 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.178 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.178 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:14.178 { 00:20:14.178 "cntlid": 65, 00:20:14.178 "qid": 0, 00:20:14.178 "state": "enabled", 00:20:14.178 "thread": "nvmf_tgt_poll_group_000", 00:20:14.178 "listen_address": { 00:20:14.178 "trtype": "TCP", 00:20:14.178 "adrfam": "IPv4", 00:20:14.178 "traddr": "10.0.0.2", 00:20:14.178 "trsvcid": "4420" 00:20:14.178 }, 00:20:14.178 "peer_address": { 00:20:14.178 "trtype": "TCP", 00:20:14.178 "adrfam": "IPv4", 00:20:14.178 "traddr": "10.0.0.1", 00:20:14.178 "trsvcid": "34644" 00:20:14.178 }, 00:20:14.178 "auth": { 00:20:14.178 "state": "completed", 00:20:14.178 "digest": "sha384", 00:20:14.178 "dhgroup": "ffdhe3072" 00:20:14.178 } 00:20:14.178 } 00:20:14.178 ]' 00:20:14.178 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:14.436 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:14.436 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:14.436 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:14.436 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:14.436 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.436 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.436 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.694 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NmIzYmQ3OTk5NWZlYWE3MGFjZjA3Mjc4M2JkYjRhMzlhY2MwMzU4NGY2NWVlODMycseK5g==: --dhchap-ctrl-secret 
DHHC-1:03:ZWYzZTdmN2MyOTI4ZDRiZGQ3OTFmMmJkZjFmZjFlMjJiMGQzNzUzMWUxYWY2NzFkZTJkNGU2YTUzODYyYjQ2N+bo6Mk=: 00:20:15.628 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.628 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:15.628 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.628 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.628 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.628 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:15.628 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:15.628 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:15.887 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:20:15.887 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:15.887 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:15.887 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:15.887 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:15.887 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.887 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.887 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.887 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.887 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.887 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.887 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.146 00:20:16.146 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:16.146 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.146 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:16.404 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.404 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.404 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.404 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.404 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.404 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:16.404 { 00:20:16.404 "cntlid": 67, 00:20:16.404 "qid": 0, 00:20:16.404 "state": "enabled", 00:20:16.404 "thread": "nvmf_tgt_poll_group_000", 00:20:16.404 "listen_address": { 00:20:16.404 "trtype": "TCP", 00:20:16.404 "adrfam": "IPv4", 00:20:16.404 "traddr": "10.0.0.2", 00:20:16.404 "trsvcid": "4420" 00:20:16.404 }, 00:20:16.404 "peer_address": { 00:20:16.404 "trtype": "TCP", 00:20:16.404 "adrfam": "IPv4", 00:20:16.404 "traddr": "10.0.0.1", 00:20:16.404 "trsvcid": "34670" 00:20:16.404 }, 00:20:16.404 "auth": { 00:20:16.404 "state": "completed", 00:20:16.404 "digest": "sha384", 00:20:16.404 "dhgroup": "ffdhe3072" 00:20:16.404 } 00:20:16.404 } 00:20:16.404 ]' 00:20:16.404 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:16.404 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:16.404 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:16.663 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:16.663 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:16.663 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.663 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.663 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.921 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDE1MjQ1YWZmMzViMmMxZmI2NjY5NDhmYTE5MGY3OTbCgy6n: --dhchap-ctrl-secret DHHC-1:02:NWNkMmM1MWY0MWE2ZDBkNjdkMGRkZDUzZWVkZDgwZTViMjcwZmMyMWE1NjRkNzExeIF+8w==: 00:20:17.855 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.855 16:25:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:17.855 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.855 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.855 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.855 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:17.855 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:17.855 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:18.113 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:20:18.113 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:18.113 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:18.113 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:18.113 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:18.113 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.113 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.113 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.113 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.113 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.113 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.113 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.680 00:20:18.680 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:18.680 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:18.680 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.680 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.680 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.680 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.680 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.680 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.680 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:18.680 { 00:20:18.680 "cntlid": 69, 00:20:18.680 "qid": 0, 00:20:18.680 "state": "enabled", 00:20:18.680 "thread": "nvmf_tgt_poll_group_000", 00:20:18.680 "listen_address": { 00:20:18.680 "trtype": "TCP", 00:20:18.680 "adrfam": "IPv4", 00:20:18.680 "traddr": "10.0.0.2", 00:20:18.680 "trsvcid": "4420" 00:20:18.680 }, 00:20:18.680 "peer_address": { 00:20:18.680 "trtype": "TCP", 00:20:18.680 "adrfam": "IPv4", 00:20:18.680 "traddr": "10.0.0.1", 00:20:18.680 "trsvcid": "34692" 00:20:18.680 }, 00:20:18.680 "auth": { 00:20:18.680 "state": "completed", 00:20:18.680 "digest": "sha384", 00:20:18.680 "dhgroup": "ffdhe3072" 00:20:18.680 } 00:20:18.680 } 00:20:18.680 ]' 00:20:18.680 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:18.938 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:18.938 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:18.938 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:18.938 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:18.938 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.938 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.938 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.196 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjA5NmFmNDliMjAyZWViMTMwNGFhY2I0ZGUyNDQ3MjlhZDAzN2VmYzJhYTFlNjgy5hHQCg==: --dhchap-ctrl-secret DHHC-1:01:MGQ1OTRjNDQ0YTk4Y2ZkMjEzMjMxZWE1YzRiMTQ4ZTA1WqAV: 00:20:20.130 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.130 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:20.130 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.130 16:25:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.130 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.130 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:20.130 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:20.130 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:20.388 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:20:20.388 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:20.388 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:20.388 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:20.388 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:20.388 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.388 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:20.388 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.388 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.388 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.388 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:20.388 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:20.646 00:20:20.646 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:20.646 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:20.646 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.904 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.904 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.904 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:20.904 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.904 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.904 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:20.904 { 00:20:20.904 "cntlid": 71, 00:20:20.904 "qid": 0, 00:20:20.904 "state": "enabled", 00:20:20.904 "thread": "nvmf_tgt_poll_group_000", 00:20:20.904 "listen_address": { 00:20:20.904 "trtype": "TCP", 00:20:20.904 "adrfam": "IPv4", 00:20:20.904 "traddr": "10.0.0.2", 00:20:20.904 "trsvcid": "4420" 00:20:20.904 }, 00:20:20.904 "peer_address": { 00:20:20.904 "trtype": "TCP", 00:20:20.904 "adrfam": "IPv4", 00:20:20.904 "traddr": "10.0.0.1", 00:20:20.904 "trsvcid": "49122" 00:20:20.904 }, 00:20:20.904 "auth": { 00:20:20.904 "state": "completed", 00:20:20.904 "digest": "sha384", 00:20:20.904 "dhgroup": "ffdhe3072" 00:20:20.904 } 00:20:20.904 } 00:20:20.904 ]' 00:20:20.904 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:21.162 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.162 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:21.162 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:21.162 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:21.162 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.162 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.162 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.451 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzllNTdiMWNjNDE0NTc5ZDdjMWY3YjAzMjZjNTJiZmViNGJjOWE5ZjkzMWM1YjIwY2RiM2Y0MTI1ODM1M2ZmOEEXW9w=: 00:20:22.384 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.384 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:22.384 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.384 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.384 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.384 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:22.384 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:22.384 16:25:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:22.384 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:22.642 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:20:22.642 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:22.642 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:22.642 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:22.642 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:22.642 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.642 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.642 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.642 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.642 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.642 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.642 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.206 00:20:23.206 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.206 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.206 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.464 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.464 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.464 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.464 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.464 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.464 16:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:23.464 { 00:20:23.464 "cntlid": 73, 00:20:23.464 "qid": 0, 00:20:23.464 "state": "enabled", 00:20:23.464 "thread": "nvmf_tgt_poll_group_000", 00:20:23.464 "listen_address": { 00:20:23.464 "trtype": "TCP", 00:20:23.464 "adrfam": "IPv4", 00:20:23.464 "traddr": "10.0.0.2", 00:20:23.464 "trsvcid": "4420" 00:20:23.464 }, 00:20:23.464 "peer_address": { 00:20:23.464 "trtype": "TCP", 00:20:23.464 "adrfam": "IPv4", 00:20:23.464 "traddr": "10.0.0.1", 00:20:23.464 "trsvcid": "49160" 00:20:23.464 }, 00:20:23.464 "auth": { 00:20:23.464 "state": "completed", 00:20:23.464 "digest": "sha384", 00:20:23.464 "dhgroup": "ffdhe4096" 00:20:23.464 } 00:20:23.464 } 00:20:23.464 ]' 00:20:23.464 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.464 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:23.464 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.464 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:23.464 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:23.464 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.464 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.464 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.730 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NmIzYmQ3OTk5NWZlYWE3MGFjZjA3Mjc4M2JkYjRhMzlhY2MwMzU4NGY2NWVlODMycseK5g==: --dhchap-ctrl-secret DHHC-1:03:ZWYzZTdmN2MyOTI4ZDRiZGQ3OTFmMmJkZjFmZjFlMjJiMGQzNzUzMWUxYWY2NzFkZTJkNGU2YTUzODYyYjQ2N+bo6Mk=: 00:20:24.662 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.662 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.662 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.662 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.662 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.662 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:24.662 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:24.662 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:24.920 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:20:24.920 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:24.920 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:24.920 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:24.920 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:24.920 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.920 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.920 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.920 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.920 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.920 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.920 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.486 00:20:25.486 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:25.486 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:25.486 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.744 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.744 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.744 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.744 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.744 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.744 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:25.744 { 00:20:25.744 "cntlid": 75, 00:20:25.744 "qid": 0, 00:20:25.744 "state": "enabled", 00:20:25.744 "thread": "nvmf_tgt_poll_group_000", 00:20:25.744 "listen_address": { 
00:20:25.744 "trtype": "TCP", 00:20:25.744 "adrfam": "IPv4", 00:20:25.744 "traddr": "10.0.0.2", 00:20:25.744 "trsvcid": "4420" 00:20:25.744 }, 00:20:25.744 "peer_address": { 00:20:25.744 "trtype": "TCP", 00:20:25.744 "adrfam": "IPv4", 00:20:25.744 "traddr": "10.0.0.1", 00:20:25.744 "trsvcid": "49204" 00:20:25.744 }, 00:20:25.744 "auth": { 00:20:25.744 "state": "completed", 00:20:25.744 "digest": "sha384", 00:20:25.744 "dhgroup": "ffdhe4096" 00:20:25.744 } 00:20:25.744 } 00:20:25.744 ]' 00:20:25.744 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:25.744 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:25.744 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:25.744 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:25.745 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:25.745 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.745 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.745 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.002 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDE1MjQ1YWZmMzViMmMxZmI2NjY5NDhmYTE5MGY3OTbCgy6n: --dhchap-ctrl-secret DHHC-1:02:NWNkMmM1MWY0MWE2ZDBkNjdkMGRkZDUzZWVkZDgwZTViMjcwZmMyMWE1NjRkNzExeIF+8w==: 00:20:26.935 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.935 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:26.935 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.935 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.935 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.935 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:26.935 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:26.935 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:27.193 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:20:27.193 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:27.193 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:27.193 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:27.193 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:27.193 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.193 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.193 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.193 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.193 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.193 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.193 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.758 00:20:27.758 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:27.758 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:27.758 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.017 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.017 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.017 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.017 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.017 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.017 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:28.017 { 00:20:28.017 "cntlid": 77, 00:20:28.017 "qid": 0, 00:20:28.017 "state": "enabled", 00:20:28.017 "thread": "nvmf_tgt_poll_group_000", 00:20:28.017 "listen_address": { 00:20:28.017 "trtype": "TCP", 00:20:28.017 "adrfam": "IPv4", 00:20:28.017 "traddr": "10.0.0.2", 00:20:28.017 "trsvcid": "4420" 00:20:28.017 }, 00:20:28.017 "peer_address": { 00:20:28.017 "trtype": "TCP", 00:20:28.017 "adrfam": "IPv4", 00:20:28.017 "traddr": "10.0.0.1", 00:20:28.017 "trsvcid": "49234" 00:20:28.017 }, 00:20:28.017 "auth": { 00:20:28.017 
"state": "completed", 00:20:28.017 "digest": "sha384", 00:20:28.017 "dhgroup": "ffdhe4096" 00:20:28.017 } 00:20:28.017 } 00:20:28.017 ]' 00:20:28.017 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:28.017 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:28.017 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:28.017 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:28.017 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:28.017 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.017 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.017 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.275 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjA5NmFmNDliMjAyZWViMTMwNGFhY2I0ZGUyNDQ3MjlhZDAzN2VmYzJhYTFlNjgy5hHQCg==: --dhchap-ctrl-secret DHHC-1:01:MGQ1OTRjNDQ0YTk4Y2ZkMjEzMjMxZWE1YzRiMTQ4ZTA1WqAV: 00:20:29.209 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.209 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:29.209 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.209 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.209 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.209 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:29.209 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:29.209 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:29.468 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:20:29.468 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:29.468 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:29.468 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:29.468 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:20:29.468 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.468 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:29.468 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.468 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.468 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.468 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:29.468 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:30.034 00:20:30.034 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:30.034 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.034 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:30.034 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.034 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.034 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.034 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.293 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.293 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:30.293 { 00:20:30.293 "cntlid": 79, 00:20:30.293 "qid": 0, 00:20:30.293 "state": "enabled", 00:20:30.293 "thread": "nvmf_tgt_poll_group_000", 00:20:30.293 "listen_address": { 00:20:30.293 "trtype": "TCP", 00:20:30.293 "adrfam": "IPv4", 00:20:30.293 "traddr": "10.0.0.2", 00:20:30.293 "trsvcid": "4420" 00:20:30.293 }, 00:20:30.293 "peer_address": { 00:20:30.293 "trtype": "TCP", 00:20:30.293 "adrfam": "IPv4", 00:20:30.293 "traddr": "10.0.0.1", 00:20:30.293 "trsvcid": "51766" 00:20:30.293 }, 00:20:30.293 "auth": { 00:20:30.293 "state": "completed", 00:20:30.293 "digest": "sha384", 00:20:30.293 "dhgroup": "ffdhe4096" 00:20:30.293 } 00:20:30.293 } 00:20:30.293 ]' 00:20:30.293 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:30.293 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:30.293 16:25:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:30.293 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:30.293 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:30.293 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.293 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.293 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.551 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzllNTdiMWNjNDE0NTc5ZDdjMWY3YjAzMjZjNTJiZmViNGJjOWE5ZjkzMWM1YjIwY2RiM2Y0MTI1ODM1M2ZmOEEXW9w=: 00:20:31.485 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.485 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.485 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.485 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.485 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.485 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:31.485 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:31.485 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:31.485 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:31.744 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:20:31.744 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:31.744 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:31.744 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:31.744 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:31.744 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.744 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.744 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.744 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.744 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.744 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.744 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.310 00:20:32.568 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:32.568 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:32.568 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.826 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.826 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.826 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.826 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.826 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.826 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:32.826 { 00:20:32.826 "cntlid": 81, 00:20:32.826 "qid": 0, 00:20:32.826 "state": "enabled", 00:20:32.826 "thread": "nvmf_tgt_poll_group_000", 00:20:32.826 "listen_address": { 00:20:32.826 "trtype": "TCP", 00:20:32.826 "adrfam": "IPv4", 00:20:32.826 "traddr": "10.0.0.2", 00:20:32.826 "trsvcid": "4420" 00:20:32.826 }, 00:20:32.826 "peer_address": { 00:20:32.826 "trtype": "TCP", 00:20:32.826 "adrfam": "IPv4", 00:20:32.826 "traddr": "10.0.0.1", 00:20:32.826 "trsvcid": "51796" 00:20:32.826 }, 00:20:32.826 "auth": { 00:20:32.826 "state": "completed", 00:20:32.826 "digest": "sha384", 00:20:32.826 "dhgroup": "ffdhe6144" 00:20:32.826 } 00:20:32.826 } 00:20:32.826 ]' 00:20:32.826 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:32.826 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.826 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:32.826 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:32.826 16:25:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:32.826 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.826 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.826 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.084 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NmIzYmQ3OTk5NWZlYWE3MGFjZjA3Mjc4M2JkYjRhMzlhY2MwMzU4NGY2NWVlODMycseK5g==: --dhchap-ctrl-secret DHHC-1:03:ZWYzZTdmN2MyOTI4ZDRiZGQ3OTFmMmJkZjFmZjFlMjJiMGQzNzUzMWUxYWY2NzFkZTJkNGU2YTUzODYyYjQ2N+bo6Mk=: 00:20:34.023 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.023 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.023 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.023 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.023 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.023 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:34.023 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:34.023 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:34.281 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:20:34.281 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:34.281 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:34.281 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:34.281 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:34.281 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.281 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.281 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.281 16:25:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.281 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.281 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.281 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.846 00:20:35.103 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:35.103 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:35.103 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.361 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.361 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.361 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.361 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.361 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.361 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:35.361 { 00:20:35.361 "cntlid": 83, 00:20:35.361 "qid": 0, 00:20:35.361 "state": "enabled", 00:20:35.361 "thread": "nvmf_tgt_poll_group_000", 00:20:35.361 "listen_address": { 00:20:35.361 "trtype": "TCP", 00:20:35.361 "adrfam": "IPv4", 00:20:35.361 "traddr": "10.0.0.2", 00:20:35.361 "trsvcid": "4420" 00:20:35.361 }, 00:20:35.361 "peer_address": { 00:20:35.361 "trtype": "TCP", 00:20:35.361 "adrfam": "IPv4", 00:20:35.361 "traddr": "10.0.0.1", 00:20:35.361 "trsvcid": "51814" 00:20:35.361 }, 00:20:35.361 "auth": { 00:20:35.361 "state": "completed", 00:20:35.361 "digest": "sha384", 00:20:35.361 "dhgroup": "ffdhe6144" 00:20:35.361 } 00:20:35.361 } 00:20:35.361 ]' 00:20:35.361 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:35.361 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:35.361 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:35.361 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:35.361 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:35.361 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.361 16:25:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.361 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.619 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDE1MjQ1YWZmMzViMmMxZmI2NjY5NDhmYTE5MGY3OTbCgy6n: --dhchap-ctrl-secret DHHC-1:02:NWNkMmM1MWY0MWE2ZDBkNjdkMGRkZDUzZWVkZDgwZTViMjcwZmMyMWE1NjRkNzExeIF+8w==: 00:20:37.029 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.029 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:37.029 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.029 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.029 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.029 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:37.029 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:37.029 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:37.029 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:20:37.029 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:37.029 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:37.029 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:37.029 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:37.029 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.029 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.029 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.029 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.029 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.029 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.029 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.596 00:20:37.596 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:37.596 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:37.596 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.854 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.854 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.854 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.854 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.854 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.854 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:37.854 { 00:20:37.854 "cntlid": 85, 00:20:37.854 "qid": 0, 00:20:37.854 "state": "enabled", 00:20:37.854 "thread": "nvmf_tgt_poll_group_000", 00:20:37.854 "listen_address": { 00:20:37.854 "trtype": "TCP", 00:20:37.854 "adrfam": "IPv4", 00:20:37.854 "traddr": "10.0.0.2", 00:20:37.854 "trsvcid": "4420" 00:20:37.854 }, 00:20:37.854 "peer_address": { 00:20:37.854 "trtype": "TCP", 00:20:37.855 "adrfam": "IPv4", 00:20:37.855 "traddr": "10.0.0.1", 00:20:37.855 "trsvcid": "51850" 00:20:37.855 }, 00:20:37.855 "auth": { 00:20:37.855 "state": "completed", 00:20:37.855 "digest": "sha384", 00:20:37.855 "dhgroup": "ffdhe6144" 00:20:37.855 } 00:20:37.855 } 00:20:37.855 ]' 00:20:37.855 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:37.855 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.855 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:37.855 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:37.855 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:38.112 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.112 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.112 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.370 
16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjA5NmFmNDliMjAyZWViMTMwNGFhY2I0ZGUyNDQ3MjlhZDAzN2VmYzJhYTFlNjgy5hHQCg==: --dhchap-ctrl-secret DHHC-1:01:MGQ1OTRjNDQ0YTk4Y2ZkMjEzMjMxZWE1YzRiMTQ4ZTA1WqAV: 00:20:39.304 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.304 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:39.304 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.304 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.304 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.304 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:39.304 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:39.304 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:39.562 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:20:39.562 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:39.562 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:39.562 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:39.562 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:39.562 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.562 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:39.562 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.562 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.562 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.562 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:39.562 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:40.128 00:20:40.128 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:40.128 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:40.128 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.387 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.387 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.387 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.387 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.387 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.387 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:40.387 { 00:20:40.387 "cntlid": 87, 00:20:40.387 "qid": 0, 00:20:40.387 "state": "enabled", 00:20:40.387 "thread": "nvmf_tgt_poll_group_000", 00:20:40.387 "listen_address": { 00:20:40.387 "trtype": "TCP", 00:20:40.387 "adrfam": "IPv4", 00:20:40.387 "traddr": "10.0.0.2", 00:20:40.387 "trsvcid": "4420" 00:20:40.387 }, 00:20:40.387 "peer_address": { 00:20:40.387 "trtype": "TCP", 00:20:40.387 "adrfam": "IPv4", 00:20:40.387 "traddr": "10.0.0.1", 00:20:40.387 "trsvcid": "56080" 00:20:40.387 }, 00:20:40.387 "auth": { 00:20:40.387 "state": "completed", 00:20:40.387 "digest": "sha384", 00:20:40.387 "dhgroup": "ffdhe6144" 00:20:40.387 } 00:20:40.387 } 00:20:40.387 ]' 00:20:40.387 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:40.387 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:40.387 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:40.387 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:40.387 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:40.387 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.387 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.387 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.645 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzllNTdiMWNjNDE0NTc5ZDdjMWY3YjAzMjZjNTJiZmViNGJjOWE5ZjkzMWM1YjIwY2RiM2Y0MTI1ODM1M2ZmOEEXW9w=: 00:20:42.016 16:26:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.016 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.016 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.016 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.016 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.016 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:42.016 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:42.016 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:42.016 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:42.016 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:20:42.016 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:42.016 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:42.016 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:42.016 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:42.016 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.016 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.016 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.016 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.016 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.016 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.016 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.950 00:20:42.950 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:42.950 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:42.950 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.208 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.208 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.208 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.208 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.208 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.208 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:43.208 { 00:20:43.208 "cntlid": 89, 00:20:43.208 "qid": 0, 00:20:43.208 "state": "enabled", 00:20:43.208 "thread": "nvmf_tgt_poll_group_000", 00:20:43.208 "listen_address": { 00:20:43.208 "trtype": "TCP", 00:20:43.208 "adrfam": "IPv4", 00:20:43.208 "traddr": "10.0.0.2", 00:20:43.208 "trsvcid": "4420" 00:20:43.208 }, 00:20:43.208 "peer_address": { 00:20:43.208 "trtype": "TCP", 00:20:43.208 "adrfam": "IPv4", 00:20:43.208 "traddr": "10.0.0.1", 00:20:43.208 "trsvcid": "56112" 00:20:43.208 }, 00:20:43.208 "auth": { 00:20:43.208 "state": "completed", 00:20:43.208 "digest": "sha384", 00:20:43.208 "dhgroup": "ffdhe8192" 00:20:43.208 } 00:20:43.208 } 00:20:43.208 ]' 00:20:43.208 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:43.208 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:43.208 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:43.466 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:43.466 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:43.466 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.466 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.466 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.723 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NmIzYmQ3OTk5NWZlYWE3MGFjZjA3Mjc4M2JkYjRhMzlhY2MwMzU4NGY2NWVlODMycseK5g==: --dhchap-ctrl-secret DHHC-1:03:ZWYzZTdmN2MyOTI4ZDRiZGQ3OTFmMmJkZjFmZjFlMjJiMGQzNzUzMWUxYWY2NzFkZTJkNGU2YTUzODYyYjQ2N+bo6Mk=: 00:20:44.655 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
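The entries above record one full connect_authenticate round for a single DH-HMAC-CHAP key: the host-side bdev_nvme options are set to the digest/dhgroup under test, the host NQN is allowed on the subsystem with the matching key pair, a controller is attached through the host RPC socket, the qpair's auth fields are checked with jq, the controller is detached, and the same key is then exercised through the kernel initiator with nvme connect/disconnect before the host entry is removed. A minimal sketch of that round, assuming the rpc_cmd and hostrpc helpers visible in this trace (hostrpc wraps scripts/rpc.py -s /var/tmp/host.sock) and with $hostnqn, $hostid and the DHHC-1 secrets standing in for the uuid:5b23e107-... values and keys used in this run:

    # one connect_authenticate round (sketch; sha384 / ffdhe8192 / key0 as in the entries above)
    hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # verify the negotiated auth parameters on the target side
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'    # expect "completed"
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.digest'   # expect "sha384"
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.dhgroup'  # expect "ffdhe8192"
    hostrpc bdev_nvme_detach_controller nvme0
    # repeat the handshake with the kernel initiator, then clean up
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" --hostid "$hostid" \
        --dhchap-secret "<DHHC-1 key0 secret>" --dhchap-ctrl-secret "<DHHC-1 ckey0 secret>"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"

The entries that follow repeat this round for the remaining keys and dhgroups (ffdhe8192 key1..key3, then the sha512 digest starting with the null dhgroup), which is why the same RPC and nvme invocations recur with only the key index, dhgroup, and secrets changing.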
00:20:44.655 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:44.655 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.655 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.655 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.655 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:44.655 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:44.655 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:44.913 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:20:44.913 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:44.913 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:44.913 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:44.913 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:44.913 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.913 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.913 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.913 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.913 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.913 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.913 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.846 00:20:45.846 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:45.846 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:45.846 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.104 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.104 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.104 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.104 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.104 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.104 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:46.104 { 00:20:46.104 "cntlid": 91, 00:20:46.104 "qid": 0, 00:20:46.104 "state": "enabled", 00:20:46.104 "thread": "nvmf_tgt_poll_group_000", 00:20:46.104 "listen_address": { 00:20:46.104 "trtype": "TCP", 00:20:46.104 "adrfam": "IPv4", 00:20:46.104 "traddr": "10.0.0.2", 00:20:46.104 "trsvcid": "4420" 00:20:46.104 }, 00:20:46.104 "peer_address": { 00:20:46.104 "trtype": "TCP", 00:20:46.104 "adrfam": "IPv4", 00:20:46.104 "traddr": "10.0.0.1", 00:20:46.104 "trsvcid": "56142" 00:20:46.104 }, 00:20:46.104 "auth": { 00:20:46.104 "state": "completed", 00:20:46.104 "digest": "sha384", 00:20:46.104 "dhgroup": "ffdhe8192" 00:20:46.104 } 00:20:46.104 } 00:20:46.104 ]' 00:20:46.104 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:46.104 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:46.104 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:46.104 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:46.104 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:46.104 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.104 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.104 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.362 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDE1MjQ1YWZmMzViMmMxZmI2NjY5NDhmYTE5MGY3OTbCgy6n: --dhchap-ctrl-secret DHHC-1:02:NWNkMmM1MWY0MWE2ZDBkNjdkMGRkZDUzZWVkZDgwZTViMjcwZmMyMWE1NjRkNzExeIF+8w==: 00:20:47.295 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.295 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:47.295 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.295 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.295 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.295 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:47.295 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:47.295 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:47.553 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:20:47.553 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:47.553 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:47.553 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:47.553 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:47.553 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.553 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.553 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.553 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.553 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.553 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.553 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.485 00:20:48.485 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:48.485 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:48.485 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.743 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.743 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.743 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.743 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.743 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.743 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:48.743 { 00:20:48.743 "cntlid": 93, 00:20:48.743 "qid": 0, 00:20:48.743 "state": "enabled", 00:20:48.743 "thread": "nvmf_tgt_poll_group_000", 00:20:48.743 "listen_address": { 00:20:48.743 "trtype": "TCP", 00:20:48.743 "adrfam": "IPv4", 00:20:48.743 "traddr": "10.0.0.2", 00:20:48.743 "trsvcid": "4420" 00:20:48.743 }, 00:20:48.743 "peer_address": { 00:20:48.743 "trtype": "TCP", 00:20:48.743 "adrfam": "IPv4", 00:20:48.743 "traddr": "10.0.0.1", 00:20:48.743 "trsvcid": "56162" 00:20:48.743 }, 00:20:48.743 "auth": { 00:20:48.743 "state": "completed", 00:20:48.743 "digest": "sha384", 00:20:48.743 "dhgroup": "ffdhe8192" 00:20:48.743 } 00:20:48.743 } 00:20:48.743 ]' 00:20:48.743 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:48.743 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:48.743 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:48.743 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:48.743 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:49.002 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.002 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.002 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.259 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjA5NmFmNDliMjAyZWViMTMwNGFhY2I0ZGUyNDQ3MjlhZDAzN2VmYzJhYTFlNjgy5hHQCg==: --dhchap-ctrl-secret DHHC-1:01:MGQ1OTRjNDQ0YTk4Y2ZkMjEzMjMxZWE1YzRiMTQ4ZTA1WqAV: 00:20:50.192 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.192 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:50.192 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.192 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.192 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.192 16:26:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:50.192 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:50.192 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:50.450 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:20:50.450 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:50.450 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:50.450 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:50.450 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:50.450 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.450 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:50.450 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.450 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.450 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.450 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:50.450 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:51.384 00:20:51.384 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:51.384 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:51.384 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.642 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.642 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.642 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.642 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.642 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:20:51.642 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:51.642 { 00:20:51.642 "cntlid": 95, 00:20:51.642 "qid": 0, 00:20:51.642 "state": "enabled", 00:20:51.642 "thread": "nvmf_tgt_poll_group_000", 00:20:51.642 "listen_address": { 00:20:51.642 "trtype": "TCP", 00:20:51.642 "adrfam": "IPv4", 00:20:51.642 "traddr": "10.0.0.2", 00:20:51.642 "trsvcid": "4420" 00:20:51.642 }, 00:20:51.642 "peer_address": { 00:20:51.642 "trtype": "TCP", 00:20:51.642 "adrfam": "IPv4", 00:20:51.642 "traddr": "10.0.0.1", 00:20:51.642 "trsvcid": "44272" 00:20:51.642 }, 00:20:51.642 "auth": { 00:20:51.642 "state": "completed", 00:20:51.642 "digest": "sha384", 00:20:51.642 "dhgroup": "ffdhe8192" 00:20:51.642 } 00:20:51.642 } 00:20:51.642 ]' 00:20:51.642 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:51.642 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:51.642 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:51.642 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:51.642 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:51.643 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.643 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.643 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.900 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzllNTdiMWNjNDE0NTc5ZDdjMWY3YjAzMjZjNTJiZmViNGJjOWE5ZjkzMWM1YjIwY2RiM2Y0MTI1ODM1M2ZmOEEXW9w=: 00:20:52.867 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.125 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:53.125 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.125 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.125 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.125 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:53.125 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:53.125 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:53.125 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups null 00:20:53.125 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:53.383 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:20:53.383 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:53.383 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:53.383 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:53.383 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:53.383 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.383 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.383 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.383 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.383 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.383 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.383 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.640 00:20:53.640 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:53.640 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:53.640 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.897 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.897 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.897 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.897 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.897 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.897 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:53.897 { 00:20:53.897 "cntlid": 97, 00:20:53.897 "qid": 0, 00:20:53.897 
"state": "enabled", 00:20:53.897 "thread": "nvmf_tgt_poll_group_000", 00:20:53.897 "listen_address": { 00:20:53.897 "trtype": "TCP", 00:20:53.897 "adrfam": "IPv4", 00:20:53.897 "traddr": "10.0.0.2", 00:20:53.897 "trsvcid": "4420" 00:20:53.897 }, 00:20:53.897 "peer_address": { 00:20:53.897 "trtype": "TCP", 00:20:53.897 "adrfam": "IPv4", 00:20:53.897 "traddr": "10.0.0.1", 00:20:53.897 "trsvcid": "44308" 00:20:53.897 }, 00:20:53.897 "auth": { 00:20:53.897 "state": "completed", 00:20:53.897 "digest": "sha512", 00:20:53.897 "dhgroup": "null" 00:20:53.897 } 00:20:53.897 } 00:20:53.897 ]' 00:20:53.897 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:53.897 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:53.897 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:53.897 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:53.897 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:54.155 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.155 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.155 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.412 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NmIzYmQ3OTk5NWZlYWE3MGFjZjA3Mjc4M2JkYjRhMzlhY2MwMzU4NGY2NWVlODMycseK5g==: --dhchap-ctrl-secret DHHC-1:03:ZWYzZTdmN2MyOTI4ZDRiZGQ3OTFmMmJkZjFmZjFlMjJiMGQzNzUzMWUxYWY2NzFkZTJkNGU2YTUzODYyYjQ2N+bo6Mk=: 00:20:55.345 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.345 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:55.345 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.345 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.345 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.345 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:55.345 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:55.345 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:55.603 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:20:55.603 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:55.603 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:55.603 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:55.603 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:55.603 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.603 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.603 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.603 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.603 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.603 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.603 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.860 00:20:55.860 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:55.860 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:55.860 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.118 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.118 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.118 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.118 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.118 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.118 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:56.118 { 00:20:56.118 "cntlid": 99, 00:20:56.118 "qid": 0, 00:20:56.118 "state": "enabled", 00:20:56.118 "thread": "nvmf_tgt_poll_group_000", 00:20:56.118 "listen_address": { 00:20:56.118 "trtype": "TCP", 00:20:56.118 "adrfam": "IPv4", 00:20:56.118 "traddr": "10.0.0.2", 00:20:56.118 "trsvcid": "4420" 00:20:56.118 }, 00:20:56.118 "peer_address": { 00:20:56.118 "trtype": "TCP", 00:20:56.118 "adrfam": "IPv4", 
00:20:56.118 "traddr": "10.0.0.1", 00:20:56.118 "trsvcid": "44342" 00:20:56.118 }, 00:20:56.118 "auth": { 00:20:56.118 "state": "completed", 00:20:56.118 "digest": "sha512", 00:20:56.118 "dhgroup": "null" 00:20:56.118 } 00:20:56.118 } 00:20:56.118 ]' 00:20:56.118 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:56.118 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:56.118 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:56.118 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:56.118 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:56.118 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.118 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.118 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.376 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDE1MjQ1YWZmMzViMmMxZmI2NjY5NDhmYTE5MGY3OTbCgy6n: --dhchap-ctrl-secret DHHC-1:02:NWNkMmM1MWY0MWE2ZDBkNjdkMGRkZDUzZWVkZDgwZTViMjcwZmMyMWE1NjRkNzExeIF+8w==: 00:20:57.749 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.749 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.749 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.749 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.749 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.749 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:57.749 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:57.749 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:57.749 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:20:57.749 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:57.749 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:57.749 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=null 00:20:57.749 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:57.749 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.749 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.749 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.749 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.749 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.749 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.749 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.007 00:20:58.007 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:58.007 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.007 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:58.265 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.265 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.265 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.265 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.265 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.265 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:58.265 { 00:20:58.265 "cntlid": 101, 00:20:58.265 "qid": 0, 00:20:58.265 "state": "enabled", 00:20:58.265 "thread": "nvmf_tgt_poll_group_000", 00:20:58.265 "listen_address": { 00:20:58.265 "trtype": "TCP", 00:20:58.265 "adrfam": "IPv4", 00:20:58.265 "traddr": "10.0.0.2", 00:20:58.265 "trsvcid": "4420" 00:20:58.265 }, 00:20:58.265 "peer_address": { 00:20:58.265 "trtype": "TCP", 00:20:58.265 "adrfam": "IPv4", 00:20:58.265 "traddr": "10.0.0.1", 00:20:58.265 "trsvcid": "44370" 00:20:58.265 }, 00:20:58.265 "auth": { 00:20:58.265 "state": "completed", 00:20:58.265 "digest": "sha512", 00:20:58.265 "dhgroup": "null" 00:20:58.265 } 00:20:58.265 } 00:20:58.265 ]' 00:20:58.265 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:58.265 
16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:58.265 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:58.523 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:58.523 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:58.523 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.523 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.523 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.781 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjA5NmFmNDliMjAyZWViMTMwNGFhY2I0ZGUyNDQ3MjlhZDAzN2VmYzJhYTFlNjgy5hHQCg==: --dhchap-ctrl-secret DHHC-1:01:MGQ1OTRjNDQ0YTk4Y2ZkMjEzMjMxZWE1YzRiMTQ4ZTA1WqAV: 00:20:59.714 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.714 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:59.714 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.714 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.714 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.714 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:59.714 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:59.714 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:59.972 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:20:59.972 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:59.972 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:59.972 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:59.972 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:59.972 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.972 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:59.972 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.972 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.972 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.972 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:59.972 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:00.229 00:21:00.229 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:00.229 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:00.229 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.486 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.486 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.487 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.487 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.487 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.487 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:00.487 { 00:21:00.487 "cntlid": 103, 00:21:00.487 "qid": 0, 00:21:00.487 "state": "enabled", 00:21:00.487 "thread": "nvmf_tgt_poll_group_000", 00:21:00.487 "listen_address": { 00:21:00.487 "trtype": "TCP", 00:21:00.487 "adrfam": "IPv4", 00:21:00.487 "traddr": "10.0.0.2", 00:21:00.487 "trsvcid": "4420" 00:21:00.487 }, 00:21:00.487 "peer_address": { 00:21:00.487 "trtype": "TCP", 00:21:00.487 "adrfam": "IPv4", 00:21:00.487 "traddr": "10.0.0.1", 00:21:00.487 "trsvcid": "33314" 00:21:00.487 }, 00:21:00.487 "auth": { 00:21:00.487 "state": "completed", 00:21:00.487 "digest": "sha512", 00:21:00.487 "dhgroup": "null" 00:21:00.487 } 00:21:00.487 } 00:21:00.487 ]' 00:21:00.487 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:00.487 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:00.487 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:00.744 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:00.744 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
00:21:00.744 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.744 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.744 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.001 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzllNTdiMWNjNDE0NTc5ZDdjMWY3YjAzMjZjNTJiZmViNGJjOWE5ZjkzMWM1YjIwY2RiM2Y0MTI1ODM1M2ZmOEEXW9w=: 00:21:01.933 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.933 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:01.933 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.933 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.933 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.933 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:01.933 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:01.933 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:01.933 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:02.190 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:21:02.190 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:02.190 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:02.190 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:02.190 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:02.190 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.190 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.190 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.190 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.190 
16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.190 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.190 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.448 00:21:02.448 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:02.448 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:02.448 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.706 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.706 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.706 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.706 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.963 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.963 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:02.963 { 00:21:02.963 "cntlid": 105, 00:21:02.963 "qid": 0, 00:21:02.963 "state": "enabled", 00:21:02.963 "thread": "nvmf_tgt_poll_group_000", 00:21:02.963 "listen_address": { 00:21:02.963 "trtype": "TCP", 00:21:02.963 "adrfam": "IPv4", 00:21:02.963 "traddr": "10.0.0.2", 00:21:02.963 "trsvcid": "4420" 00:21:02.963 }, 00:21:02.963 "peer_address": { 00:21:02.963 "trtype": "TCP", 00:21:02.963 "adrfam": "IPv4", 00:21:02.963 "traddr": "10.0.0.1", 00:21:02.963 "trsvcid": "33344" 00:21:02.963 }, 00:21:02.963 "auth": { 00:21:02.963 "state": "completed", 00:21:02.963 "digest": "sha512", 00:21:02.963 "dhgroup": "ffdhe2048" 00:21:02.963 } 00:21:02.963 } 00:21:02.963 ]' 00:21:02.963 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:02.963 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:02.963 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:02.963 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:02.963 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:02.963 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.963 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.963 16:26:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.220 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NmIzYmQ3OTk5NWZlYWE3MGFjZjA3Mjc4M2JkYjRhMzlhY2MwMzU4NGY2NWVlODMycseK5g==: --dhchap-ctrl-secret DHHC-1:03:ZWYzZTdmN2MyOTI4ZDRiZGQ3OTFmMmJkZjFmZjFlMjJiMGQzNzUzMWUxYWY2NzFkZTJkNGU2YTUzODYyYjQ2N+bo6Mk=: 00:21:04.150 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.150 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:04.150 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.150 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.150 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.150 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:04.150 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:04.150 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:04.407 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:21:04.407 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:04.407 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:04.407 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:04.408 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:04.408 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.408 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.408 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.408 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.408 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.408 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.408 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.666 00:21:04.666 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:04.666 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:04.666 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.925 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.925 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.925 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.925 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.925 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.925 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:04.925 { 00:21:04.925 "cntlid": 107, 00:21:04.925 "qid": 0, 00:21:04.925 "state": "enabled", 00:21:04.925 "thread": "nvmf_tgt_poll_group_000", 00:21:04.925 "listen_address": { 00:21:04.925 "trtype": "TCP", 00:21:04.925 "adrfam": "IPv4", 00:21:04.925 "traddr": "10.0.0.2", 00:21:04.925 "trsvcid": "4420" 00:21:04.925 }, 00:21:04.925 "peer_address": { 00:21:04.925 "trtype": "TCP", 00:21:04.925 "adrfam": "IPv4", 00:21:04.925 "traddr": "10.0.0.1", 00:21:04.925 "trsvcid": "33366" 00:21:04.925 }, 00:21:04.925 "auth": { 00:21:04.925 "state": "completed", 00:21:04.925 "digest": "sha512", 00:21:04.925 "dhgroup": "ffdhe2048" 00:21:04.925 } 00:21:04.925 } 00:21:04.925 ]' 00:21:04.925 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:05.183 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:05.183 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:05.183 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:05.183 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:05.183 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.183 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.183 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.440 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # 
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDE1MjQ1YWZmMzViMmMxZmI2NjY5NDhmYTE5MGY3OTbCgy6n: --dhchap-ctrl-secret DHHC-1:02:NWNkMmM1MWY0MWE2ZDBkNjdkMGRkZDUzZWVkZDgwZTViMjcwZmMyMWE1NjRkNzExeIF+8w==: 00:21:06.373 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.373 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.373 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.373 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.373 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.373 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:06.373 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:06.373 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:06.631 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:21:06.631 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:06.631 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:06.631 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:06.631 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:06.631 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.631 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.631 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.631 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.631 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.631 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.631 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.889 00:21:06.889 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:06.889 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:06.889 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.147 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.147 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.147 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.147 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.147 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.147 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:07.147 { 00:21:07.147 "cntlid": 109, 00:21:07.147 "qid": 0, 00:21:07.147 "state": "enabled", 00:21:07.147 "thread": "nvmf_tgt_poll_group_000", 00:21:07.147 "listen_address": { 00:21:07.147 "trtype": "TCP", 00:21:07.147 "adrfam": "IPv4", 00:21:07.147 "traddr": "10.0.0.2", 00:21:07.147 "trsvcid": "4420" 00:21:07.147 }, 00:21:07.147 "peer_address": { 00:21:07.147 "trtype": "TCP", 00:21:07.147 "adrfam": "IPv4", 00:21:07.147 "traddr": "10.0.0.1", 00:21:07.147 "trsvcid": "33406" 00:21:07.147 }, 00:21:07.147 "auth": { 00:21:07.147 "state": "completed", 00:21:07.147 "digest": "sha512", 00:21:07.147 "dhgroup": "ffdhe2048" 00:21:07.147 } 00:21:07.147 } 00:21:07.147 ]' 00:21:07.147 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:07.147 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:07.147 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:07.409 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:07.409 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:07.409 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.409 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.409 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.707 16:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjA5NmFmNDliMjAyZWViMTMwNGFhY2I0ZGUyNDQ3MjlhZDAzN2VmYzJhYTFlNjgy5hHQCg==: --dhchap-ctrl-secret 
DHHC-1:01:MGQ1OTRjNDQ0YTk4Y2ZkMjEzMjMxZWE1YzRiMTQ4ZTA1WqAV: 00:21:08.640 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.640 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.640 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.640 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.640 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.640 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:08.640 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:08.640 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:08.897 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:21:08.897 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:08.897 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:08.897 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:08.897 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:08.897 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.897 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:08.897 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.897 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.897 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.897 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:08.897 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:09.155 00:21:09.155 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:09.155 16:26:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:09.155 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.413 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.413 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.413 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.413 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.413 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.413 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:09.413 { 00:21:09.413 "cntlid": 111, 00:21:09.413 "qid": 0, 00:21:09.413 "state": "enabled", 00:21:09.413 "thread": "nvmf_tgt_poll_group_000", 00:21:09.413 "listen_address": { 00:21:09.413 "trtype": "TCP", 00:21:09.413 "adrfam": "IPv4", 00:21:09.413 "traddr": "10.0.0.2", 00:21:09.413 "trsvcid": "4420" 00:21:09.413 }, 00:21:09.413 "peer_address": { 00:21:09.413 "trtype": "TCP", 00:21:09.413 "adrfam": "IPv4", 00:21:09.413 "traddr": "10.0.0.1", 00:21:09.413 "trsvcid": "33430" 00:21:09.413 }, 00:21:09.413 "auth": { 00:21:09.413 "state": "completed", 00:21:09.413 "digest": "sha512", 00:21:09.413 "dhgroup": "ffdhe2048" 00:21:09.413 } 00:21:09.413 } 00:21:09.413 ]' 00:21:09.413 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:09.413 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.413 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:09.413 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:09.413 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:09.413 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.413 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.413 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.671 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzllNTdiMWNjNDE0NTc5ZDdjMWY3YjAzMjZjNTJiZmViNGJjOWE5ZjkzMWM1YjIwY2RiM2Y0MTI1ODM1M2ZmOEEXW9w=: 00:21:10.605 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.605 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.605 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.605 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.605 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.605 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:10.605 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:10.605 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:10.605 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:10.863 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:21:10.863 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:10.863 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:10.863 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:10.863 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:10.863 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.863 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.863 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.863 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.863 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.863 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.863 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.430 00:21:11.430 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:11.430 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:11.430 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:21:11.430 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.430 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.430 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.430 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.430 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.430 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:11.430 { 00:21:11.430 "cntlid": 113, 00:21:11.430 "qid": 0, 00:21:11.430 "state": "enabled", 00:21:11.430 "thread": "nvmf_tgt_poll_group_000", 00:21:11.430 "listen_address": { 00:21:11.430 "trtype": "TCP", 00:21:11.430 "adrfam": "IPv4", 00:21:11.430 "traddr": "10.0.0.2", 00:21:11.430 "trsvcid": "4420" 00:21:11.430 }, 00:21:11.430 "peer_address": { 00:21:11.430 "trtype": "TCP", 00:21:11.430 "adrfam": "IPv4", 00:21:11.430 "traddr": "10.0.0.1", 00:21:11.430 "trsvcid": "59560" 00:21:11.430 }, 00:21:11.430 "auth": { 00:21:11.430 "state": "completed", 00:21:11.430 "digest": "sha512", 00:21:11.430 "dhgroup": "ffdhe3072" 00:21:11.430 } 00:21:11.430 } 00:21:11.430 ]' 00:21:11.430 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:11.688 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:11.688 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:11.688 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:11.688 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:11.688 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.688 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.688 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.947 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NmIzYmQ3OTk5NWZlYWE3MGFjZjA3Mjc4M2JkYjRhMzlhY2MwMzU4NGY2NWVlODMycseK5g==: --dhchap-ctrl-secret DHHC-1:03:ZWYzZTdmN2MyOTI4ZDRiZGQ3OTFmMmJkZjFmZjFlMjJiMGQzNzUzMWUxYWY2NzFkZTJkNGU2YTUzODYyYjQ2N+bo6Mk=: 00:21:12.880 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.880 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:12.880 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
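The target/auth.sh line tags visible in this trace (auth.sh@92 through auth.sh@96) come from the driver loop of the test: for every allowed DH group it reconfigures the host-side bdev_nvme options and re-runs connect_authenticate once per configured key index. A condensed sketch of that control flow, inferred from the xtrace markers rather than copied from the script, with the digest fixed to sha512 as in this run:

    for dhgroup in "${dhgroups[@]}"; do        # auth.sh@92
        for keyid in "${!keys[@]}"; do         # auth.sh@93
            # limit the host side to one digest/dhgroup combination   (auth.sh@94)
            hostrpc bdev_nvme_set_options \
                --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
            # one full authenticate/verify/teardown cycle             (auth.sh@96)
            connect_authenticate sha512 "$dhgroup" "$keyid"
        done
    done

hostrpc and connect_authenticate are helpers defined elsewhere in auth.sh; as the expanded commands above show, hostrpc forwards to scripts/rpc.py with -s /var/tmp/host.sock so that the host-side SPDK instance, not the target, receives the call.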
00:21:12.880 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.880 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.880 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:12.880 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:12.880 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:13.139 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:21:13.139 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:13.139 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:13.139 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:13.139 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:13.139 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.139 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.139 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.139 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.139 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.139 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.139 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.704 00:21:13.704 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:13.704 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:13.704 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.961 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.961 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.961 
16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.961 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.961 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.961 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:13.961 { 00:21:13.961 "cntlid": 115, 00:21:13.961 "qid": 0, 00:21:13.961 "state": "enabled", 00:21:13.961 "thread": "nvmf_tgt_poll_group_000", 00:21:13.961 "listen_address": { 00:21:13.961 "trtype": "TCP", 00:21:13.961 "adrfam": "IPv4", 00:21:13.961 "traddr": "10.0.0.2", 00:21:13.961 "trsvcid": "4420" 00:21:13.961 }, 00:21:13.961 "peer_address": { 00:21:13.961 "trtype": "TCP", 00:21:13.961 "adrfam": "IPv4", 00:21:13.961 "traddr": "10.0.0.1", 00:21:13.961 "trsvcid": "59576" 00:21:13.961 }, 00:21:13.961 "auth": { 00:21:13.961 "state": "completed", 00:21:13.961 "digest": "sha512", 00:21:13.961 "dhgroup": "ffdhe3072" 00:21:13.961 } 00:21:13.961 } 00:21:13.961 ]' 00:21:13.961 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:13.961 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:13.961 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:13.961 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:13.961 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:13.961 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.961 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.961 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.219 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDE1MjQ1YWZmMzViMmMxZmI2NjY5NDhmYTE5MGY3OTbCgy6n: --dhchap-ctrl-secret DHHC-1:02:NWNkMmM1MWY0MWE2ZDBkNjdkMGRkZDUzZWVkZDgwZTViMjcwZmMyMWE1NjRkNzExeIF+8w==: 00:21:15.152 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.152 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.152 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.152 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.152 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.152 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 
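Each connect_authenticate cycle recorded here follows the same host/target sequence. A minimal sketch of one iteration, assembled from the commands expanded above (rpc.py paths are shortened, and the DHHC-1 secret strings are the per-test keys, replaced by placeholders here):

    # allow the host NQN on the subsystem with the keypair under test (target-side RPC)
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # attach a controller through the separate host RPC socket, authenticating with the same keys
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # verify the qpair (the jq checks in the trace), then detach the RPC-attached controller
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

    # repeat the handshake with the kernel initiator via nvme-cli, passing the raw secrets
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-secret 'DHHC-1:01:<host secret>' --dhchap-ctrl-secret 'DHHC-1:02:<ctrl secret>'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0     # expected: "disconnected 1 controller(s)"

    # remove the host again so the next key/dhgroup combination starts clean
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55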
00:21:15.152 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:15.153 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:15.410 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:21:15.410 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:15.410 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:15.410 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:15.410 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:15.410 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.410 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.410 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.410 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.668 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.668 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.668 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.926 00:21:15.926 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:15.926 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:15.926 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.184 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.184 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.184 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.184 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.184 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.184 
16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:16.184 { 00:21:16.184 "cntlid": 117, 00:21:16.184 "qid": 0, 00:21:16.184 "state": "enabled", 00:21:16.184 "thread": "nvmf_tgt_poll_group_000", 00:21:16.184 "listen_address": { 00:21:16.184 "trtype": "TCP", 00:21:16.184 "adrfam": "IPv4", 00:21:16.184 "traddr": "10.0.0.2", 00:21:16.184 "trsvcid": "4420" 00:21:16.184 }, 00:21:16.184 "peer_address": { 00:21:16.184 "trtype": "TCP", 00:21:16.184 "adrfam": "IPv4", 00:21:16.184 "traddr": "10.0.0.1", 00:21:16.184 "trsvcid": "59612" 00:21:16.184 }, 00:21:16.184 "auth": { 00:21:16.184 "state": "completed", 00:21:16.184 "digest": "sha512", 00:21:16.184 "dhgroup": "ffdhe3072" 00:21:16.184 } 00:21:16.184 } 00:21:16.184 ]' 00:21:16.184 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:16.184 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:16.184 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:16.184 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:16.184 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:16.184 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.184 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.184 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.751 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjA5NmFmNDliMjAyZWViMTMwNGFhY2I0ZGUyNDQ3MjlhZDAzN2VmYzJhYTFlNjgy5hHQCg==: --dhchap-ctrl-secret DHHC-1:01:MGQ1OTRjNDQ0YTk4Y2ZkMjEzMjMxZWE1YzRiMTQ4ZTA1WqAV: 00:21:17.686 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.686 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:17.686 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.686 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.686 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.686 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:17.686 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:17.686 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:17.944 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:21:17.944 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:17.944 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:17.944 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:17.944 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:17.944 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.944 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:17.944 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.944 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.944 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.944 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:17.944 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:18.202 00:21:18.202 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:18.202 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:18.202 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.461 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.461 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.461 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.461 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.461 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.461 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:18.461 { 00:21:18.461 "cntlid": 119, 00:21:18.461 "qid": 0, 00:21:18.461 "state": "enabled", 00:21:18.461 "thread": "nvmf_tgt_poll_group_000", 00:21:18.461 "listen_address": { 00:21:18.461 "trtype": "TCP", 00:21:18.461 "adrfam": "IPv4", 00:21:18.461 "traddr": "10.0.0.2", 00:21:18.461 "trsvcid": "4420" 00:21:18.461 }, 00:21:18.461 
"peer_address": { 00:21:18.461 "trtype": "TCP", 00:21:18.461 "adrfam": "IPv4", 00:21:18.461 "traddr": "10.0.0.1", 00:21:18.461 "trsvcid": "59648" 00:21:18.461 }, 00:21:18.461 "auth": { 00:21:18.461 "state": "completed", 00:21:18.461 "digest": "sha512", 00:21:18.461 "dhgroup": "ffdhe3072" 00:21:18.461 } 00:21:18.461 } 00:21:18.461 ]' 00:21:18.461 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:18.461 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.461 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:18.461 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:18.461 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:18.719 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.719 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.719 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.979 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzllNTdiMWNjNDE0NTc5ZDdjMWY3YjAzMjZjNTJiZmViNGJjOWE5ZjkzMWM1YjIwY2RiM2Y0MTI1ODM1M2ZmOEEXW9w=: 00:21:19.912 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.912 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.912 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.912 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.912 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.912 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:19.912 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:19.912 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:19.912 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:20.170 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:21:20.170 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:20.170 16:26:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:20.170 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:20.170 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:20.170 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.170 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.170 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.170 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.170 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.170 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.170 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.736 00:21:20.736 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:20.736 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:20.736 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.736 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.736 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.736 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.736 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.736 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.736 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:20.736 { 00:21:20.736 "cntlid": 121, 00:21:20.736 "qid": 0, 00:21:20.736 "state": "enabled", 00:21:20.736 "thread": "nvmf_tgt_poll_group_000", 00:21:20.736 "listen_address": { 00:21:20.736 "trtype": "TCP", 00:21:20.736 "adrfam": "IPv4", 00:21:20.736 "traddr": "10.0.0.2", 00:21:20.736 "trsvcid": "4420" 00:21:20.736 }, 00:21:20.736 "peer_address": { 00:21:20.736 "trtype": "TCP", 00:21:20.736 "adrfam": "IPv4", 00:21:20.736 "traddr": "10.0.0.1", 00:21:20.736 "trsvcid": "54098" 00:21:20.736 }, 00:21:20.736 "auth": { 00:21:20.736 "state": "completed", 00:21:20.736 "digest": "sha512", 00:21:20.736 "dhgroup": 
"ffdhe4096" 00:21:20.736 } 00:21:20.736 } 00:21:20.736 ]' 00:21:20.736 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:20.994 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:20.994 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:20.994 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:20.994 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:20.994 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.994 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.994 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.252 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NmIzYmQ3OTk5NWZlYWE3MGFjZjA3Mjc4M2JkYjRhMzlhY2MwMzU4NGY2NWVlODMycseK5g==: --dhchap-ctrl-secret DHHC-1:03:ZWYzZTdmN2MyOTI4ZDRiZGQ3OTFmMmJkZjFmZjFlMjJiMGQzNzUzMWUxYWY2NzFkZTJkNGU2YTUzODYyYjQ2N+bo6Mk=: 00:21:22.185 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.185 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:22.185 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.185 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.185 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.185 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:22.185 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:22.185 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:22.443 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:21:22.443 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:22.443 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:22.443 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:22.443 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 
00:21:22.443 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.443 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.443 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.443 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.443 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.443 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.443 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.042 00:21:23.042 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:23.042 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:23.042 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.300 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.300 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.300 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.300 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.300 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.300 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:23.300 { 00:21:23.300 "cntlid": 123, 00:21:23.300 "qid": 0, 00:21:23.300 "state": "enabled", 00:21:23.300 "thread": "nvmf_tgt_poll_group_000", 00:21:23.300 "listen_address": { 00:21:23.300 "trtype": "TCP", 00:21:23.300 "adrfam": "IPv4", 00:21:23.300 "traddr": "10.0.0.2", 00:21:23.300 "trsvcid": "4420" 00:21:23.300 }, 00:21:23.300 "peer_address": { 00:21:23.300 "trtype": "TCP", 00:21:23.300 "adrfam": "IPv4", 00:21:23.300 "traddr": "10.0.0.1", 00:21:23.300 "trsvcid": "54114" 00:21:23.300 }, 00:21:23.300 "auth": { 00:21:23.300 "state": "completed", 00:21:23.300 "digest": "sha512", 00:21:23.300 "dhgroup": "ffdhe4096" 00:21:23.300 } 00:21:23.300 } 00:21:23.300 ]' 00:21:23.300 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:23.300 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 
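The pass/fail decision for each cycle is the block of jq checks that the trace shows right after bdev_nvme_get_controllers and nvmf_subsystem_get_qpairs. Reduced to its essentials (same RPCs and jq filters as above, with the expected values of the sha512/ffdhe4096 iteration in progress here; rpc.py paths shortened):

    # the host must see exactly the controller it attached
    [[ "$(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]

    # the target-side qpair must report the negotiated digest, DH group and a completed auth state
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == sha512    ]]
    [[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == ffdhe4096 ]]
    [[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed ]]

If any of these bracket tests fails under set -e the test aborts, which is why the trace repeats the same three comparisons after every attach.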
00:21:23.300 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:23.300 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:23.300 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:23.300 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.300 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.300 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.559 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDE1MjQ1YWZmMzViMmMxZmI2NjY5NDhmYTE5MGY3OTbCgy6n: --dhchap-ctrl-secret DHHC-1:02:NWNkMmM1MWY0MWE2ZDBkNjdkMGRkZDUzZWVkZDgwZTViMjcwZmMyMWE1NjRkNzExeIF+8w==: 00:21:24.491 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.491 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:24.491 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.491 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.491 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.491 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:24.491 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:24.491 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:24.748 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:21:24.748 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:24.748 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:24.748 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:24.748 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:24.748 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.748 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.748 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.748 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.748 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.748 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.748 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.312 00:21:25.312 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:25.312 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.313 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:25.570 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.570 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.570 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.570 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.570 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.570 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:25.570 { 00:21:25.570 "cntlid": 125, 00:21:25.570 "qid": 0, 00:21:25.570 "state": "enabled", 00:21:25.570 "thread": "nvmf_tgt_poll_group_000", 00:21:25.570 "listen_address": { 00:21:25.570 "trtype": "TCP", 00:21:25.570 "adrfam": "IPv4", 00:21:25.570 "traddr": "10.0.0.2", 00:21:25.570 "trsvcid": "4420" 00:21:25.570 }, 00:21:25.570 "peer_address": { 00:21:25.570 "trtype": "TCP", 00:21:25.570 "adrfam": "IPv4", 00:21:25.570 "traddr": "10.0.0.1", 00:21:25.570 "trsvcid": "54138" 00:21:25.570 }, 00:21:25.570 "auth": { 00:21:25.570 "state": "completed", 00:21:25.570 "digest": "sha512", 00:21:25.571 "dhgroup": "ffdhe4096" 00:21:25.571 } 00:21:25.571 } 00:21:25.571 ]' 00:21:25.571 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:25.571 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:25.571 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:25.571 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:25.571 16:26:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:25.571 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.571 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.571 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.828 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjA5NmFmNDliMjAyZWViMTMwNGFhY2I0ZGUyNDQ3MjlhZDAzN2VmYzJhYTFlNjgy5hHQCg==: --dhchap-ctrl-secret DHHC-1:01:MGQ1OTRjNDQ0YTk4Y2ZkMjEzMjMxZWE1YzRiMTQ4ZTA1WqAV: 00:21:26.761 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.761 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.761 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.761 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.761 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.761 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:26.761 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:26.761 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:27.019 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:21:27.019 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:27.019 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:27.019 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:27.020 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:27.020 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.020 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:27.020 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.020 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.020 
16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.020 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:27.020 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:27.584 00:21:27.584 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:27.584 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:27.584 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.842 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.842 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.842 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.842 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.842 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.842 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:27.842 { 00:21:27.842 "cntlid": 127, 00:21:27.842 "qid": 0, 00:21:27.842 "state": "enabled", 00:21:27.842 "thread": "nvmf_tgt_poll_group_000", 00:21:27.842 "listen_address": { 00:21:27.842 "trtype": "TCP", 00:21:27.842 "adrfam": "IPv4", 00:21:27.842 "traddr": "10.0.0.2", 00:21:27.842 "trsvcid": "4420" 00:21:27.842 }, 00:21:27.842 "peer_address": { 00:21:27.842 "trtype": "TCP", 00:21:27.842 "adrfam": "IPv4", 00:21:27.842 "traddr": "10.0.0.1", 00:21:27.842 "trsvcid": "54164" 00:21:27.842 }, 00:21:27.842 "auth": { 00:21:27.842 "state": "completed", 00:21:27.842 "digest": "sha512", 00:21:27.842 "dhgroup": "ffdhe4096" 00:21:27.842 } 00:21:27.842 } 00:21:27.842 ]' 00:21:27.842 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:27.842 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.842 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:27.842 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:27.842 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:27.842 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.842 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.842 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.099 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzllNTdiMWNjNDE0NTc5ZDdjMWY3YjAzMjZjNTJiZmViNGJjOWE5ZjkzMWM1YjIwY2RiM2Y0MTI1ODM1M2ZmOEEXW9w=: 00:21:29.470 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.470 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:29.470 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.470 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.470 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.470 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:29.470 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:29.470 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:29.470 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:29.470 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:21:29.470 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:29.470 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:29.470 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:29.470 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:29.470 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.470 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.470 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.470 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.470 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.471 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.471 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.034 00:21:30.034 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:30.034 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.034 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:30.292 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.292 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.292 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.292 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.292 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.292 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:30.292 { 00:21:30.292 "cntlid": 129, 00:21:30.292 "qid": 0, 00:21:30.292 "state": "enabled", 00:21:30.292 "thread": "nvmf_tgt_poll_group_000", 00:21:30.292 "listen_address": { 00:21:30.292 "trtype": "TCP", 00:21:30.292 "adrfam": "IPv4", 00:21:30.292 "traddr": "10.0.0.2", 00:21:30.292 "trsvcid": "4420" 00:21:30.292 }, 00:21:30.292 "peer_address": { 00:21:30.292 "trtype": "TCP", 00:21:30.292 "adrfam": "IPv4", 00:21:30.292 "traddr": "10.0.0.1", 00:21:30.292 "trsvcid": "36020" 00:21:30.292 }, 00:21:30.292 "auth": { 00:21:30.292 "state": "completed", 00:21:30.292 "digest": "sha512", 00:21:30.292 "dhgroup": "ffdhe6144" 00:21:30.292 } 00:21:30.292 } 00:21:30.292 ]' 00:21:30.292 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:30.292 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:30.292 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:30.292 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:30.292 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:30.549 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.549 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.549 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.807 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # 
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NmIzYmQ3OTk5NWZlYWE3MGFjZjA3Mjc4M2JkYjRhMzlhY2MwMzU4NGY2NWVlODMycseK5g==: --dhchap-ctrl-secret DHHC-1:03:ZWYzZTdmN2MyOTI4ZDRiZGQ3OTFmMmJkZjFmZjFlMjJiMGQzNzUzMWUxYWY2NzFkZTJkNGU2YTUzODYyYjQ2N+bo6Mk=: 00:21:31.740 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.740 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:31.740 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.740 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.740 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.740 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:31.740 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:31.740 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:31.998 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:21:31.998 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:31.998 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:31.998 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:31.998 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:31.998 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.998 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.998 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.998 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.998 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.998 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.998 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.563 00:21:32.563 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:32.563 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.563 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:32.821 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.821 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.821 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.821 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.821 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.821 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:32.821 { 00:21:32.821 "cntlid": 131, 00:21:32.821 "qid": 0, 00:21:32.821 "state": "enabled", 00:21:32.821 "thread": "nvmf_tgt_poll_group_000", 00:21:32.821 "listen_address": { 00:21:32.821 "trtype": "TCP", 00:21:32.821 "adrfam": "IPv4", 00:21:32.821 "traddr": "10.0.0.2", 00:21:32.821 "trsvcid": "4420" 00:21:32.821 }, 00:21:32.821 "peer_address": { 00:21:32.821 "trtype": "TCP", 00:21:32.821 "adrfam": "IPv4", 00:21:32.821 "traddr": "10.0.0.1", 00:21:32.821 "trsvcid": "36054" 00:21:32.821 }, 00:21:32.821 "auth": { 00:21:32.821 "state": "completed", 00:21:32.821 "digest": "sha512", 00:21:32.821 "dhgroup": "ffdhe6144" 00:21:32.821 } 00:21:32.821 } 00:21:32.821 ]' 00:21:32.821 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:32.821 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.821 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:33.079 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:33.079 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:33.079 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.079 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.079 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.336 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDE1MjQ1YWZmMzViMmMxZmI2NjY5NDhmYTE5MGY3OTbCgy6n: --dhchap-ctrl-secret 
DHHC-1:02:NWNkMmM1MWY0MWE2ZDBkNjdkMGRkZDUzZWVkZDgwZTViMjcwZmMyMWE1NjRkNzExeIF+8w==: 00:21:34.269 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.269 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:34.269 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.269 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.269 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.269 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:34.269 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:34.269 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:34.527 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:21:34.527 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:34.527 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:34.527 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:34.527 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:34.527 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.527 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.527 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.527 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.527 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.527 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.527 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.093 00:21:35.093 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- 
# hostrpc bdev_nvme_get_controllers 00:21:35.093 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:35.093 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.351 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.351 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.351 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.351 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.351 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.351 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:35.351 { 00:21:35.351 "cntlid": 133, 00:21:35.351 "qid": 0, 00:21:35.351 "state": "enabled", 00:21:35.351 "thread": "nvmf_tgt_poll_group_000", 00:21:35.351 "listen_address": { 00:21:35.351 "trtype": "TCP", 00:21:35.351 "adrfam": "IPv4", 00:21:35.351 "traddr": "10.0.0.2", 00:21:35.351 "trsvcid": "4420" 00:21:35.351 }, 00:21:35.351 "peer_address": { 00:21:35.351 "trtype": "TCP", 00:21:35.351 "adrfam": "IPv4", 00:21:35.351 "traddr": "10.0.0.1", 00:21:35.351 "trsvcid": "36092" 00:21:35.351 }, 00:21:35.351 "auth": { 00:21:35.351 "state": "completed", 00:21:35.351 "digest": "sha512", 00:21:35.351 "dhgroup": "ffdhe6144" 00:21:35.351 } 00:21:35.351 } 00:21:35.351 ]' 00:21:35.351 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:35.351 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:35.351 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:35.351 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:35.351 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:35.351 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.351 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.351 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.609 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjA5NmFmNDliMjAyZWViMTMwNGFhY2I0ZGUyNDQ3MjlhZDAzN2VmYzJhYTFlNjgy5hHQCg==: --dhchap-ctrl-secret DHHC-1:01:MGQ1OTRjNDQ0YTk4Y2ZkMjEzMjMxZWE1YzRiMTQ4ZTA1WqAV: 00:21:36.573 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.573 16:26:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:36.573 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.573 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.573 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.573 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:36.573 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:36.573 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:36.832 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:21:36.832 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:36.832 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:36.832 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:36.832 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:36.832 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.832 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:36.832 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.832 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.832 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.832 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:36.832 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:37.406 00:21:37.406 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:37.406 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:37.406 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.664 16:26:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.664 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.664 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.664 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.664 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.664 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:37.664 { 00:21:37.664 "cntlid": 135, 00:21:37.664 "qid": 0, 00:21:37.664 "state": "enabled", 00:21:37.664 "thread": "nvmf_tgt_poll_group_000", 00:21:37.664 "listen_address": { 00:21:37.664 "trtype": "TCP", 00:21:37.664 "adrfam": "IPv4", 00:21:37.664 "traddr": "10.0.0.2", 00:21:37.664 "trsvcid": "4420" 00:21:37.664 }, 00:21:37.664 "peer_address": { 00:21:37.664 "trtype": "TCP", 00:21:37.664 "adrfam": "IPv4", 00:21:37.664 "traddr": "10.0.0.1", 00:21:37.664 "trsvcid": "36114" 00:21:37.664 }, 00:21:37.664 "auth": { 00:21:37.664 "state": "completed", 00:21:37.664 "digest": "sha512", 00:21:37.664 "dhgroup": "ffdhe6144" 00:21:37.664 } 00:21:37.664 } 00:21:37.664 ]' 00:21:37.664 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:37.664 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:37.664 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:37.664 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:37.664 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:37.922 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.922 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.922 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.180 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzllNTdiMWNjNDE0NTc5ZDdjMWY3YjAzMjZjNTJiZmViNGJjOWE5ZjkzMWM1YjIwY2RiM2Y0MTI1ODM1M2ZmOEEXW9w=: 00:21:39.118 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.118 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:39.118 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.118 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.118 16:26:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.118 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:39.118 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:39.118 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:39.118 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:39.377 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:21:39.377 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:39.377 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:39.377 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:39.377 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:39.377 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.377 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.377 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.377 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.377 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.377 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.377 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.316 00:21:40.316 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:40.316 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:40.316 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.573 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.573 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.574 16:27:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.574 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.574 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.574 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:40.574 { 00:21:40.574 "cntlid": 137, 00:21:40.574 "qid": 0, 00:21:40.574 "state": "enabled", 00:21:40.574 "thread": "nvmf_tgt_poll_group_000", 00:21:40.574 "listen_address": { 00:21:40.574 "trtype": "TCP", 00:21:40.574 "adrfam": "IPv4", 00:21:40.574 "traddr": "10.0.0.2", 00:21:40.574 "trsvcid": "4420" 00:21:40.574 }, 00:21:40.574 "peer_address": { 00:21:40.574 "trtype": "TCP", 00:21:40.574 "adrfam": "IPv4", 00:21:40.574 "traddr": "10.0.0.1", 00:21:40.574 "trsvcid": "60528" 00:21:40.574 }, 00:21:40.574 "auth": { 00:21:40.574 "state": "completed", 00:21:40.574 "digest": "sha512", 00:21:40.574 "dhgroup": "ffdhe8192" 00:21:40.574 } 00:21:40.574 } 00:21:40.574 ]' 00:21:40.574 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:40.574 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:40.574 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:40.574 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:40.574 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:40.574 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.574 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.574 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.832 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NmIzYmQ3OTk5NWZlYWE3MGFjZjA3Mjc4M2JkYjRhMzlhY2MwMzU4NGY2NWVlODMycseK5g==: --dhchap-ctrl-secret DHHC-1:03:ZWYzZTdmN2MyOTI4ZDRiZGQ3OTFmMmJkZjFmZjFlMjJiMGQzNzUzMWUxYWY2NzFkZTJkNGU2YTUzODYyYjQ2N+bo6Mk=: 00:21:41.769 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.769 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:41.769 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.769 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.029 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.029 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # 
for keyid in "${!keys[@]}" 00:21:42.029 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:42.029 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:42.288 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:21:42.288 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:42.288 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:42.288 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:42.288 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:42.288 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.288 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.288 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.288 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.288 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.288 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.288 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.221 00:21:43.221 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:43.221 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:43.221 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.221 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.221 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.221 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.221 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.221 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:21:43.221 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:43.221 { 00:21:43.221 "cntlid": 139, 00:21:43.221 "qid": 0, 00:21:43.221 "state": "enabled", 00:21:43.221 "thread": "nvmf_tgt_poll_group_000", 00:21:43.221 "listen_address": { 00:21:43.221 "trtype": "TCP", 00:21:43.221 "adrfam": "IPv4", 00:21:43.221 "traddr": "10.0.0.2", 00:21:43.221 "trsvcid": "4420" 00:21:43.221 }, 00:21:43.221 "peer_address": { 00:21:43.221 "trtype": "TCP", 00:21:43.221 "adrfam": "IPv4", 00:21:43.221 "traddr": "10.0.0.1", 00:21:43.221 "trsvcid": "60554" 00:21:43.221 }, 00:21:43.221 "auth": { 00:21:43.221 "state": "completed", 00:21:43.221 "digest": "sha512", 00:21:43.221 "dhgroup": "ffdhe8192" 00:21:43.221 } 00:21:43.221 } 00:21:43.221 ]' 00:21:43.221 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:43.221 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.221 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:43.479 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:43.479 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:43.479 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.479 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.479 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.736 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDE1MjQ1YWZmMzViMmMxZmI2NjY5NDhmYTE5MGY3OTbCgy6n: --dhchap-ctrl-secret DHHC-1:02:NWNkMmM1MWY0MWE2ZDBkNjdkMGRkZDUzZWVkZDgwZTViMjcwZmMyMWE1NjRkNzExeIF+8w==: 00:21:44.673 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.673 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:44.673 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.673 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.673 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.673 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:44.673 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:44.673 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:44.932 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:21:44.932 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:44.932 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:44.932 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:44.932 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:44.932 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.932 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.932 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.932 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.932 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.932 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.932 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.868 00:21:45.868 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:45.868 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:45.868 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.126 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.126 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.126 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.126 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.126 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.126 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:46.126 { 00:21:46.126 "cntlid": 141, 00:21:46.126 "qid": 0, 00:21:46.126 "state": "enabled", 00:21:46.126 "thread": "nvmf_tgt_poll_group_000", 00:21:46.126 "listen_address": { 00:21:46.126 "trtype": "TCP", 00:21:46.126 "adrfam": "IPv4", 
00:21:46.126 "traddr": "10.0.0.2", 00:21:46.126 "trsvcid": "4420" 00:21:46.126 }, 00:21:46.126 "peer_address": { 00:21:46.126 "trtype": "TCP", 00:21:46.126 "adrfam": "IPv4", 00:21:46.126 "traddr": "10.0.0.1", 00:21:46.126 "trsvcid": "60580" 00:21:46.126 }, 00:21:46.126 "auth": { 00:21:46.126 "state": "completed", 00:21:46.126 "digest": "sha512", 00:21:46.126 "dhgroup": "ffdhe8192" 00:21:46.126 } 00:21:46.126 } 00:21:46.126 ]' 00:21:46.126 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:46.127 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:46.127 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:46.127 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:46.127 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:46.385 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.385 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.385 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.643 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YjA5NmFmNDliMjAyZWViMTMwNGFhY2I0ZGUyNDQ3MjlhZDAzN2VmYzJhYTFlNjgy5hHQCg==: --dhchap-ctrl-secret DHHC-1:01:MGQ1OTRjNDQ0YTk4Y2ZkMjEzMjMxZWE1YzRiMTQ4ZTA1WqAV: 00:21:47.582 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.582 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.582 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:47.582 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.582 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.582 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.582 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:47.582 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:47.582 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:47.841 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:21:47.841 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:21:47.841 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:47.841 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:47.841 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:47.841 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.841 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:47.841 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.841 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.841 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.841 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:47.841 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:48.778 00:21:48.778 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:48.778 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:48.778 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.036 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.036 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.036 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.036 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.036 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.036 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:49.036 { 00:21:49.036 "cntlid": 143, 00:21:49.036 "qid": 0, 00:21:49.036 "state": "enabled", 00:21:49.036 "thread": "nvmf_tgt_poll_group_000", 00:21:49.036 "listen_address": { 00:21:49.036 "trtype": "TCP", 00:21:49.036 "adrfam": "IPv4", 00:21:49.036 "traddr": "10.0.0.2", 00:21:49.036 "trsvcid": "4420" 00:21:49.036 }, 00:21:49.036 "peer_address": { 00:21:49.036 "trtype": "TCP", 00:21:49.036 "adrfam": "IPv4", 00:21:49.036 "traddr": "10.0.0.1", 00:21:49.036 "trsvcid": "60602" 00:21:49.036 }, 00:21:49.036 "auth": { 00:21:49.036 "state": "completed", 00:21:49.036 "digest": "sha512", 00:21:49.036 "dhgroup": "ffdhe8192" 00:21:49.036 } 00:21:49.037 } 00:21:49.037 ]' 
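The qpairs dump above is what each round inspects; the verification amounts to three jq checks against the negotiated auth parameters of the first queue pair, e.g. for this round (expected sha512 / ffdhe8192 / completed):

    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.digest'   # sha512
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.dhgroup'  # ffdhe8192
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'    # completed

after which the round detaches the SPDK-side controller, repeats the connect/disconnect through the kernel host (nvme connect ... with the matching --dhchap-secret / --dhchap-ctrl-secret DHHC-1 blobs), and removes the host from the subsystem so the next key can be installed.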
00:21:49.037 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:49.037 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.037 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:49.037 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:49.037 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:49.037 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.037 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.037 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.295 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzllNTdiMWNjNDE0NTc5ZDdjMWY3YjAzMjZjNTJiZmViNGJjOWE5ZjkzMWM1YjIwY2RiM2Y0MTI1ODM1M2ZmOEEXW9w=: 00:21:50.231 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.231 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:50.231 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.231 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.493 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.493 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:50.493 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:21:50.493 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:50.493 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:50.493 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:50.493 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:50.493 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:21:50.493 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
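From this point the host no longer pins a single combination but offers every supported digest and DH group at once; condensed from the trace, the host-side call is:

    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192

and the next round still negotiates sha512/ffdhe8192, as the qpair dump that follows shows.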
00:21:50.493 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:50.493 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:50.493 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:50.762 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.762 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.762 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.762 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.762 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.762 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.762 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.714 00:21:51.714 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:51.714 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:51.714 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.714 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.714 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.714 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.714 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.714 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.714 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:51.714 { 00:21:51.714 "cntlid": 145, 00:21:51.714 "qid": 0, 00:21:51.714 "state": "enabled", 00:21:51.714 "thread": "nvmf_tgt_poll_group_000", 00:21:51.714 "listen_address": { 00:21:51.714 "trtype": "TCP", 00:21:51.714 "adrfam": "IPv4", 00:21:51.714 "traddr": "10.0.0.2", 00:21:51.714 "trsvcid": "4420" 00:21:51.714 }, 00:21:51.714 "peer_address": { 00:21:51.714 "trtype": "TCP", 00:21:51.714 "adrfam": "IPv4", 00:21:51.714 "traddr": "10.0.0.1", 00:21:51.714 "trsvcid": "41330" 00:21:51.714 }, 00:21:51.714 "auth": { 00:21:51.714 "state": "completed", 00:21:51.714 "digest": "sha512", 
00:21:51.714 "dhgroup": "ffdhe8192" 00:21:51.714 } 00:21:51.714 } 00:21:51.714 ]' 00:21:51.714 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:51.714 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:51.714 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:51.972 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:51.972 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:51.972 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.972 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.972 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.229 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NmIzYmQ3OTk5NWZlYWE3MGFjZjA3Mjc4M2JkYjRhMzlhY2MwMzU4NGY2NWVlODMycseK5g==: --dhchap-ctrl-secret DHHC-1:03:ZWYzZTdmN2MyOTI4ZDRiZGQ3OTFmMmJkZjFmZjFlMjJiMGQzNzUzMWUxYWY2NzFkZTJkNGU2YTUzODYyYjQ2N+bo6Mk=: 00:21:53.165 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.165 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:53.165 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.165 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.165 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.165 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:53.165 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.165 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.165 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.165 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:53.165 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:53.165 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # 
valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:53.165 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:53.165 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:53.165 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:53.165 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:53.165 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:53.165 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:54.102 request: 00:21:54.102 { 00:21:54.102 "name": "nvme0", 00:21:54.102 "trtype": "tcp", 00:21:54.102 "traddr": "10.0.0.2", 00:21:54.102 "adrfam": "ipv4", 00:21:54.102 "trsvcid": "4420", 00:21:54.102 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:54.102 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:54.102 "prchk_reftag": false, 00:21:54.102 "prchk_guard": false, 00:21:54.102 "hdgst": false, 00:21:54.102 "ddgst": false, 00:21:54.102 "dhchap_key": "key2", 00:21:54.102 "method": "bdev_nvme_attach_controller", 00:21:54.102 "req_id": 1 00:21:54.102 } 00:21:54.102 Got JSON-RPC error response 00:21:54.102 response: 00:21:54.102 { 00:21:54.102 "code": -5, 00:21:54.102 "message": "Input/output error" 00:21:54.102 } 00:21:54.102 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:54.102 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:54.102 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:54.102 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:54.102 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.102 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.102 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.102 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.102 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.102 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.102 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.102 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.102 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:54.102 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:54.102 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:54.103 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:54.103 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:54.103 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:54.103 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:54.103 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:54.103 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:55.040 request: 00:21:55.040 { 00:21:55.040 "name": "nvme0", 00:21:55.040 "trtype": "tcp", 00:21:55.040 "traddr": "10.0.0.2", 00:21:55.040 "adrfam": "ipv4", 00:21:55.040 "trsvcid": "4420", 00:21:55.040 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:55.040 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:55.040 "prchk_reftag": false, 00:21:55.040 "prchk_guard": false, 00:21:55.040 "hdgst": false, 00:21:55.040 "ddgst": false, 00:21:55.040 "dhchap_key": "key1", 00:21:55.040 "dhchap_ctrlr_key": "ckey2", 00:21:55.040 "method": "bdev_nvme_attach_controller", 00:21:55.040 "req_id": 1 00:21:55.040 } 00:21:55.040 Got JSON-RPC error response 00:21:55.040 response: 00:21:55.040 { 00:21:55.040 "code": -5, 00:21:55.040 "message": "Input/output error" 00:21:55.040 } 00:21:55.040 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:55.040 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:55.040 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:55.040 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 
0 )) 00:21:55.040 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:55.040 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.040 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.040 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.040 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:55.040 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.040 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.040 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.040 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.040 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:55.040 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.040 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:55.040 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:55.040 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:55.040 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:55.040 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.040 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.979 request: 00:21:55.979 { 00:21:55.979 "name": "nvme0", 00:21:55.979 "trtype": "tcp", 00:21:55.979 "traddr": "10.0.0.2", 00:21:55.979 "adrfam": "ipv4", 00:21:55.979 "trsvcid": "4420", 00:21:55.979 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:55.979 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:55.979 "prchk_reftag": false, 
00:21:55.979 "prchk_guard": false, 00:21:55.979 "hdgst": false, 00:21:55.979 "ddgst": false, 00:21:55.979 "dhchap_key": "key1", 00:21:55.979 "dhchap_ctrlr_key": "ckey1", 00:21:55.979 "method": "bdev_nvme_attach_controller", 00:21:55.979 "req_id": 1 00:21:55.979 } 00:21:55.979 Got JSON-RPC error response 00:21:55.979 response: 00:21:55.979 { 00:21:55.979 "code": -5, 00:21:55.979 "message": "Input/output error" 00:21:55.979 } 00:21:55.979 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:55.979 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:55.979 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:55.979 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:55.979 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:55.979 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.979 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.979 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.979 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 658125 00:21:55.979 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 658125 ']' 00:21:55.979 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 658125 00:21:55.979 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:55.979 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:55.979 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 658125 00:21:55.979 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:55.979 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:55.979 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 658125' 00:21:55.979 killing process with pid 658125 00:21:55.979 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 658125 00:21:55.979 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 658125 00:21:57.358 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:57.358 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:57.358 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:57.358 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.358 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=681168 00:21:57.358 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:57.358 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 681168 00:21:57.358 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 681168 ']' 00:21:57.358 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:57.358 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:57.358 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:57.358 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:57.358 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.294 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:58.294 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:58.294 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:58.294 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:58.294 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.294 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:58.294 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:58.294 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 681168 00:21:58.294 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 681168 ']' 00:21:58.294 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.294 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:58.294 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
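The -5 (Input/output error) responses above are the expected outcome of the NOT-wrapped attach attempts: when the controller key supplied by the host does not match what the target-side host entry was configured with, the attach has to be rejected, and the NOT wrapper counts that failure as a pass. A minimal sketch of that negative-path pattern, assuming the same RPC sockets, NQNs and key names used in this run (mirroring the key1/ckey1 case from target/auth.sh@131-132):

  # target side: allow the host with key1 only
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1
  # host side: attach with a controller key as well and expect the -5 rejection
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 \
    && echo 'unexpected success' || echo 'rejected as expected'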
00:21:58.294 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:58.294 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.552 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:58.552 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:58.552 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:21:58.552 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.552 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.810 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.810 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:21:58.810 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:58.810 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:58.810 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:58.810 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:58.810 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.810 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:58.810 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.810 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.810 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.810 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:58.810 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:59.749 00:21:59.749 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:59.749 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:59.749 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.008 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.008 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.008 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.008 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.008 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.008 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:00.008 { 00:22:00.008 "cntlid": 1, 00:22:00.008 "qid": 0, 00:22:00.008 "state": "enabled", 00:22:00.008 "thread": "nvmf_tgt_poll_group_000", 00:22:00.008 "listen_address": { 00:22:00.008 "trtype": "TCP", 00:22:00.008 "adrfam": "IPv4", 00:22:00.008 "traddr": "10.0.0.2", 00:22:00.008 "trsvcid": "4420" 00:22:00.008 }, 00:22:00.008 "peer_address": { 00:22:00.008 "trtype": "TCP", 00:22:00.008 "adrfam": "IPv4", 00:22:00.008 "traddr": "10.0.0.1", 00:22:00.008 "trsvcid": "41376" 00:22:00.008 }, 00:22:00.008 "auth": { 00:22:00.008 "state": "completed", 00:22:00.008 "digest": "sha512", 00:22:00.008 "dhgroup": "ffdhe8192" 00:22:00.008 } 00:22:00.008 } 00:22:00.008 ]' 00:22:00.008 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:00.008 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:00.008 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:00.008 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:00.008 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:00.008 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.008 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.008 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.574 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzllNTdiMWNjNDE0NTc5ZDdjMWY3YjAzMjZjNTJiZmViNGJjOWE5ZjkzMWM1YjIwY2RiM2Y0MTI1ODM1M2ZmOEEXW9w=: 00:22:01.512 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.512 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:01.512 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.512 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.512 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.512 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:01.512 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.512 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.512 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.512 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:01.512 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:01.512 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:01.512 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:01.512 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:01.512 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:01.512 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:01.512 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:01.512 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:01.512 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:01.512 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:01.771 request: 00:22:01.771 { 00:22:01.771 "name": "nvme0", 00:22:01.771 "trtype": "tcp", 00:22:01.771 "traddr": "10.0.0.2", 00:22:01.771 "adrfam": "ipv4", 00:22:01.771 "trsvcid": "4420", 00:22:01.771 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:01.771 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:01.771 "prchk_reftag": false, 00:22:01.771 "prchk_guard": false, 00:22:01.771 "hdgst": false, 00:22:01.771 "ddgst": false, 00:22:01.771 "dhchap_key": "key3", 00:22:01.771 "method": "bdev_nvme_attach_controller", 00:22:01.771 "req_id": 1 00:22:01.771 } 00:22:01.771 Got JSON-RPC error response 00:22:01.771 response: 00:22:01.771 { 00:22:01.771 "code": -5, 00:22:01.771 "message": "Input/output error" 00:22:01.771 } 00:22:01.771 16:27:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:01.771 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:01.771 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:01.771 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:01.771 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:22:01.771 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:22:01.771 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:01.771 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:02.029 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:02.029 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:02.029 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:02.029 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:02.029 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:02.029 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:02.029 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:02.029 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:02.029 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:02.289 request: 00:22:02.289 { 00:22:02.289 "name": "nvme0", 00:22:02.289 "trtype": "tcp", 00:22:02.289 "traddr": "10.0.0.2", 00:22:02.289 "adrfam": "ipv4", 00:22:02.289 "trsvcid": "4420", 00:22:02.289 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:02.289 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:02.289 "prchk_reftag": false, 00:22:02.289 "prchk_guard": false, 00:22:02.289 "hdgst": false, 00:22:02.289 "ddgst": false, 00:22:02.289 "dhchap_key": "key3", 00:22:02.289 
"method": "bdev_nvme_attach_controller", 00:22:02.289 "req_id": 1 00:22:02.289 } 00:22:02.289 Got JSON-RPC error response 00:22:02.289 response: 00:22:02.289 { 00:22:02.289 "code": -5, 00:22:02.289 "message": "Input/output error" 00:22:02.289 } 00:22:02.548 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:02.548 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:02.548 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:02.548 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:02.548 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:02.548 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:22:02.548 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:02.548 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:02.548 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:02.548 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:02.807 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:02.807 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.807 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.807 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.807 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:02.807 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.807 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.807 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.807 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:02.807 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:02.807 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:02.807 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:02.807 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:02.807 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:02.807 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:02.807 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:02.807 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:03.065 request: 00:22:03.065 { 00:22:03.065 "name": "nvme0", 00:22:03.065 "trtype": "tcp", 00:22:03.065 "traddr": "10.0.0.2", 00:22:03.065 "adrfam": "ipv4", 00:22:03.065 "trsvcid": "4420", 00:22:03.065 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:03.065 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:03.065 "prchk_reftag": false, 00:22:03.065 "prchk_guard": false, 00:22:03.065 "hdgst": false, 00:22:03.065 "ddgst": false, 00:22:03.065 "dhchap_key": "key0", 00:22:03.065 "dhchap_ctrlr_key": "key1", 00:22:03.065 "method": "bdev_nvme_attach_controller", 00:22:03.065 "req_id": 1 00:22:03.065 } 00:22:03.065 Got JSON-RPC error response 00:22:03.065 response: 00:22:03.065 { 00:22:03.065 "code": -5, 00:22:03.065 "message": "Input/output error" 00:22:03.065 } 00:22:03.065 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:03.065 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:03.065 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:03.065 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:03.065 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:03.066 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:03.324 00:22:03.324 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:22:03.324 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 
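The earlier failures in this block come from narrowing the host-side DH-HMAC-CHAP parameters with bdev_nvme_set_options so the NOT-wrapped attach attempts are expected to fail; the full digest and dhgroup lists are restored before the remaining cases. A short sketch of that restrict-then-restore step on the host RPC socket, using the same option values as the trace (a sketch, not the full auth.sh sequence):

  # narrow the host to a single digest to make the next attach attempt fail
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
  # restore the full lists once the negative cases are done
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192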
00:22:03.324 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.582 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.582 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.582 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.839 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:22:03.839 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:22:03.839 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 658279 00:22:03.839 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 658279 ']' 00:22:03.839 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 658279 00:22:03.839 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:03.839 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:03.839 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 658279 00:22:03.839 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:03.839 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:03.839 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 658279' 00:22:03.839 killing process with pid 658279 00:22:03.839 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 658279 00:22:03.839 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 658279 00:22:06.389 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:06.389 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:06.389 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:22:06.389 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:06.389 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:22:06.389 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:06.389 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:06.389 rmmod nvme_tcp 00:22:06.389 rmmod nvme_fabrics 00:22:06.389 rmmod nvme_keyring 00:22:06.389 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:06.389 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:22:06.389 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:22:06.389 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 
681168 ']' 00:22:06.389 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 681168 00:22:06.389 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 681168 ']' 00:22:06.389 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 681168 00:22:06.389 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:06.389 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:06.389 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 681168 00:22:06.389 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:06.389 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:06.389 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 681168' 00:22:06.389 killing process with pid 681168 00:22:06.389 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 681168 00:22:06.389 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 681168 00:22:07.766 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:07.766 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:07.766 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:07.766 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:07.766 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:07.766 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:07.766 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:07.766 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.zaa /tmp/spdk.key-sha256.fgv /tmp/spdk.key-sha384.hOP /tmp/spdk.key-sha512.SIt /tmp/spdk.key-sha512.To8 /tmp/spdk.key-sha384.0oU /tmp/spdk.key-sha256.At6 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:09.671 00:22:09.671 real 3m17.227s 00:22:09.671 user 7m35.392s 00:22:09.671 sys 0m24.875s 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.671 ************************************ 00:22:09.671 END TEST nvmf_auth_target 00:22:09.671 ************************************ 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra -- 
nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:09.671 ************************************ 00:22:09.671 START TEST nvmf_bdevio_no_huge 00:22:09.671 ************************************ 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:09.671 * Looking for test storage... 00:22:09.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:09.671 16:27:29 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:09.671 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:09.672 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:09.672 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:09.672 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:09.672 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.672 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:09.672 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.672 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:09.672 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:09.672 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:22:09.672 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:11.576 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:11.576 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:22:11.576 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:11.576 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:11.576 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:11.576 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:11.576 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:11.576 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:22:11.576 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:11.576 16:27:31 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:22:11.576 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:22:11.576 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:22:11.576 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:22:11.576 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:22:11.576 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:22:11.576 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:11.576 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:11.576 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:11.576 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:11.576 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:11.576 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:11.576 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:11.576 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:11.576 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:11.576 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:11.576 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:11.576 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:11.576 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:11.576 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:11.577 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.577 16:27:31 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:11.577 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:11.577 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:11.577 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:11.577 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:11.836 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:11.836 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:11.836 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:11.836 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data. 00:22:11.836 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:22:11.836 00:22:11.836 --- 10.0.0.2 ping statistics --- 00:22:11.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.836 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:22:11.836 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:11.836 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:11.836 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:22:11.836 00:22:11.836 --- 10.0.0.1 ping statistics --- 00:22:11.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.836 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:22:11.836 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:11.836 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:22:11.836 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:11.836 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:11.836 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:11.836 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:11.836 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:11.836 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:11.836 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:11.836 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:11.836 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:11.836 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:11.836 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:11.836 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=684349 00:22:11.836 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:11.836 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 684349 00:22:11.836 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 684349 ']' 00:22:11.836 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.836 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:11.836 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
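With the two ports mapped (cvl_0_0 and cvl_0_1), the nvmf_tcp_init trace above (nvmf/common.sh@229-@268) builds the test topology: cvl_0_0 is moved into a private network namespace and addressed as the target at 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, TCP port 4420 is opened in iptables, and a ping in each direction verifies the path before the target application is started inside the namespace. A condensed sketch of the same steps, with names and addresses as detected in this run:

  # Sketch of the namespace topology built by nvmf_tcp_init above.
  NVMF_TARGET_INTERFACE=cvl_0_0            # target port, moved into the namespace
  NVMF_INITIATOR_INTERFACE=cvl_0_1         # initiator port, stays in the root namespace
  NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
  ip -4 addr flush "$NVMF_TARGET_INTERFACE"
  ip -4 addr flush "$NVMF_INITIATOR_INTERFACE"
  ip netns add "$NVMF_TARGET_NAMESPACE"
  ip link set "$NVMF_TARGET_INTERFACE" netns "$NVMF_TARGET_NAMESPACE"
  ip addr add 10.0.0.1/24 dev "$NVMF_INITIATOR_INTERFACE"
  ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev "$NVMF_TARGET_INTERFACE"
  ip link set "$NVMF_INITIATOR_INTERFACE" up
  ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set "$NVMF_TARGET_INTERFACE" up
  ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
  iptables -I INPUT 1 -i "$NVMF_INITIATOR_INTERFACE" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                           # root namespace -> target
  ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1    # namespace -> initiator
  # The target is then launched inside the namespace, as traced above:
  #   ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78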
00:22:11.836 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:11.836 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:11.836 [2024-07-26 16:27:31.495895] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:11.836 [2024-07-26 16:27:31.496033] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:12.095 [2024-07-26 16:27:31.653697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:12.355 [2024-07-26 16:27:31.936829] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:12.355 [2024-07-26 16:27:31.936912] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:12.355 [2024-07-26 16:27:31.936940] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:12.355 [2024-07-26 16:27:31.936966] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:12.355 [2024-07-26 16:27:31.936991] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:12.355 [2024-07-26 16:27:31.937129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:12.355 [2024-07-26 16:27:31.937191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:22:12.355 [2024-07-26 16:27:31.937615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:22:12.355 [2024-07-26 16:27:31.937661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:12.936 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:12.936 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:22:12.936 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:12.936 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:12.936 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:12.936 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:12.936 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:12.936 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.936 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:12.936 [2024-07-26 16:27:32.512619] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:12.936 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.936 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:12.936 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.936 16:27:32 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:12.936 Malloc0 00:22:12.936 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.936 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:12.936 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.936 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:12.936 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.936 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:12.936 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.936 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:12.936 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.936 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:12.936 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.936 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:12.936 [2024-07-26 16:27:32.602917] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:12.936 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.936 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:12.936 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:12.936 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:22:12.936 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:22:12.936 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:12.936 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:12.936 { 00:22:12.936 "params": { 00:22:12.936 "name": "Nvme$subsystem", 00:22:12.936 "trtype": "$TEST_TRANSPORT", 00:22:12.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.936 "adrfam": "ipv4", 00:22:12.936 "trsvcid": "$NVMF_PORT", 00:22:12.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.936 "hdgst": ${hdgst:-false}, 00:22:12.936 "ddgst": ${ddgst:-false} 00:22:12.936 }, 00:22:12.936 "method": "bdev_nvme_attach_controller" 00:22:12.936 } 00:22:12.936 EOF 00:22:12.936 )") 00:22:12.936 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:22:12.936 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
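bdevio.sh@18-@22 above provision the target entirely over JSON-RPC: a TCP transport, a 64 MiB malloc bdev (131072 blocks of 512 bytes, as reported later), subsystem nqn.2016-06.io.spdk:cnode1 with that bdev attached as a namespace, and a listener on 10.0.0.2:4420. rpc_cmd is a thin wrapper around scripts/rpc.py talking to the target's default /var/tmp/spdk.sock (a Unix socket, so it is reachable from the root namespace even though nvmf_tgt runs inside cvl_0_0_ns_spdk). Spelled out as direct rpc.py calls, the traced sequence looks like this illustrative sketch:

  rpc=scripts/rpc.py                              # defaults to -s /var/tmp/spdk.sock
  $rpc nvmf_create_transport -t tcp -o -u 8192    # transport options exactly as traced above
  $rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM-backed bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The gen_nvmf_target_json helper that follows builds the initiator-side configuration (a single bdev_nvme_attach_controller entry pointing at that listener) and feeds it to bdevio via /dev/fd/62.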
00:22:12.936 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:22:12.936 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:12.936 "params": { 00:22:12.936 "name": "Nvme1", 00:22:12.936 "trtype": "tcp", 00:22:12.936 "traddr": "10.0.0.2", 00:22:12.936 "adrfam": "ipv4", 00:22:12.936 "trsvcid": "4420", 00:22:12.936 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.936 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:12.936 "hdgst": false, 00:22:12.936 "ddgst": false 00:22:12.936 }, 00:22:12.936 "method": "bdev_nvme_attach_controller" 00:22:12.936 }' 00:22:12.936 [2024-07-26 16:27:32.681906] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:12.936 [2024-07-26 16:27:32.682056] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid684512 ] 00:22:13.195 [2024-07-26 16:27:32.826795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:13.454 [2024-07-26 16:27:33.080317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:13.454 [2024-07-26 16:27:33.080361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.454 [2024-07-26 16:27:33.080367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:14.022 I/O targets: 00:22:14.022 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:14.022 00:22:14.022 00:22:14.022 CUnit - A unit testing framework for C - Version 2.1-3 00:22:14.022 http://cunit.sourceforge.net/ 00:22:14.022 00:22:14.022 00:22:14.022 Suite: bdevio tests on: Nvme1n1 00:22:14.022 Test: blockdev write read block ...passed 00:22:14.022 Test: blockdev write zeroes read block ...passed 00:22:14.022 Test: blockdev write zeroes read no split ...passed 00:22:14.282 Test: blockdev write zeroes read split ...passed 00:22:14.282 Test: blockdev write zeroes read split partial ...passed 00:22:14.282 Test: blockdev reset ...[2024-07-26 16:27:33.881070] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:14.282 [2024-07-26 16:27:33.881277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f1100 (9): Bad file descriptor 00:22:14.282 [2024-07-26 16:27:33.897536] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:14.282 passed 00:22:14.282 Test: blockdev write read 8 blocks ...passed 00:22:14.282 Test: blockdev write read size > 128k ...passed 00:22:14.282 Test: blockdev write read invalid size ...passed 00:22:14.282 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:14.282 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:14.282 Test: blockdev write read max offset ...passed 00:22:14.282 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:14.282 Test: blockdev writev readv 8 blocks ...passed 00:22:14.282 Test: blockdev writev readv 30 x 1block ...passed 00:22:14.540 Test: blockdev writev readv block ...passed 00:22:14.540 Test: blockdev writev readv size > 128k ...passed 00:22:14.540 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:14.540 Test: blockdev comparev and writev ...[2024-07-26 16:27:34.078311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:14.540 [2024-07-26 16:27:34.078395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:14.540 [2024-07-26 16:27:34.078436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:14.540 [2024-07-26 16:27:34.078463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:14.540 [2024-07-26 16:27:34.079004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:14.540 [2024-07-26 16:27:34.079039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:14.540 [2024-07-26 16:27:34.079080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:14.540 [2024-07-26 16:27:34.079108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:14.540 [2024-07-26 16:27:34.079621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:14.540 [2024-07-26 16:27:34.079661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:14.541 [2024-07-26 16:27:34.079697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:14.541 [2024-07-26 16:27:34.079723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:14.541 [2024-07-26 16:27:34.080253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:14.541 [2024-07-26 16:27:34.080287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:14.541 [2024-07-26 16:27:34.080327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:14.541 [2024-07-26 16:27:34.080354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:14.541 passed 00:22:14.541 Test: blockdev nvme passthru rw ...passed 00:22:14.541 Test: blockdev nvme passthru vendor specific ...[2024-07-26 16:27:34.164613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:14.541 [2024-07-26 16:27:34.164675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:14.541 [2024-07-26 16:27:34.164974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:14.541 [2024-07-26 16:27:34.165008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:14.541 [2024-07-26 16:27:34.165260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:14.541 [2024-07-26 16:27:34.165293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:14.541 [2024-07-26 16:27:34.165536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:14.541 [2024-07-26 16:27:34.165569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:14.541 passed 00:22:14.541 Test: blockdev nvme admin passthru ...passed 00:22:14.541 Test: blockdev copy ...passed 00:22:14.541 00:22:14.541 Run Summary: Type Total Ran Passed Failed Inactive 00:22:14.541 suites 1 1 n/a 0 0 00:22:14.541 tests 23 23 23 0 0 00:22:14.541 asserts 152 152 152 0 n/a 00:22:14.541 00:22:14.541 Elapsed time = 1.104 seconds 00:22:15.476 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:15.476 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.476 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:15.476 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.476 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:15.476 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:15.476 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:15.476 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:22:15.476 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:15.476 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:22:15.476 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:15.476 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:15.476 rmmod nvme_tcp 00:22:15.476 rmmod nvme_fabrics 00:22:15.476 rmmod nvme_keyring 00:22:15.476 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:15.476 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:22:15.476 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:22:15.476 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 684349 ']' 00:22:15.476 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 684349 00:22:15.476 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 684349 ']' 00:22:15.476 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 684349 00:22:15.476 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:22:15.476 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:15.476 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 684349 00:22:15.476 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:22:15.476 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:22:15.476 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 684349' 00:22:15.476 killing process with pid 684349 00:22:15.476 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 684349 00:22:15.476 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 684349 00:22:16.413 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:16.413 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:16.414 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:16.414 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:16.414 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:16.414 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.414 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:16.414 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:18.323 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:18.323 00:22:18.323 real 0m8.699s 00:22:18.323 user 0m20.232s 00:22:18.323 sys 0m2.764s 00:22:18.323 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:18.323 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:18.323 ************************************ 00:22:18.323 END TEST nvmf_bdevio_no_huge 00:22:18.323 ************************************ 00:22:18.323 16:27:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:18.323 16:27:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # 
'[' 3 -le 1 ']' 00:22:18.323 16:27:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:18.323 16:27:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:18.323 ************************************ 00:22:18.323 START TEST nvmf_tls 00:22:18.323 ************************************ 00:22:18.323 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:18.323 * Looking for test storage... 00:22:18.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:18.323 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:18.323 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:18.323 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:18.323 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:18.323 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:18.323 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:18.323 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:18.323 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:18.323 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:18.323 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:18.323 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:18.323 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:18.323 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:18.323 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:18.323 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:18.323 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:18.323 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:18.323 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:18.323 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:18.323 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:18.323 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:18.323 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:18.323 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.323 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.323 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.323 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:18.324 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.324 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:22:18.324 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:18.324 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:18.324 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:18.324 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:18.324 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:18.324 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:18.324 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:22:18.324 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:18.324 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:18.582 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:22:18.582 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:18.582 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:18.582 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:18.582 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:18.582 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:18.582 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:18.582 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:18.582 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:18.582 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:18.582 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:18.582 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:22:18.582 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:20.488 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:20.488 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:20.488 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:20.488 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:20.488 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:20.489 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:22:20.489 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:20.489 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:20.489 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:20.489 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:20.489 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:20.489 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:20.489 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:20.489 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:20.489 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:20.489 16:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:20.489 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:20.489 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:20.489 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:20.489 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:20.489 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:20.489 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:20.489 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:20.489 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:20.489 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:20.489 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:20.489 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:20.489 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:20.489 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:20.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:20.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:22:20.489 00:22:20.489 --- 10.0.0.2 ping statistics --- 00:22:20.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:20.489 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:22:20.489 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:20.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:20.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:22:20.489 00:22:20.489 --- 10.0.0.1 ping statistics --- 00:22:20.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:20.489 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:22:20.489 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:20.489 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:22:20.489 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:20.489 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:20.489 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:20.489 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:20.489 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:20.489 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:20.489 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:20.489 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:20.489 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:20.489 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:20.489 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:20.489 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=686714 00:22:20.489 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:20.489 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 686714 00:22:20.489 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 686714 ']' 00:22:20.489 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:20.489 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:20.489 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:20.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:20.489 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:20.489 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:20.489 [2024-07-26 16:27:40.161103] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:22:20.489 [2024-07-26 16:27:40.161252] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:20.489 EAL: No free 2048 kB hugepages reported on node 1 00:22:20.748 [2024-07-26 16:27:40.291854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.008 [2024-07-26 16:27:40.530940] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:21.008 [2024-07-26 16:27:40.531019] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:21.008 [2024-07-26 16:27:40.531048] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:21.008 [2024-07-26 16:27:40.531085] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:21.008 [2024-07-26 16:27:40.531120] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:21.008 [2024-07-26 16:27:40.531174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:21.579 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:21.579 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:21.579 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:21.579 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:21.579 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:21.579 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:21.579 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:22:21.579 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:21.837 true 00:22:21.837 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:21.837 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:22:22.095 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:22:22.095 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:22.095 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:22.353 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:22.353 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:22:22.611 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:22:22.611 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:22:22.611 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 
7 00:22:22.868 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:22.869 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:22:23.128 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:22:23.128 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:22:23.128 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:23.128 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:22:23.388 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:22:23.388 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:22:23.388 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:23.646 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:23.646 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:22:23.903 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:22:23.903 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:22:23.903 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:24.160 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:24.160 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:22:24.420 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:22:24.420 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:22:24.420 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:24.420 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:24.420 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:24.420 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:24.420 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:24.420 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:24.420 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:24.420 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:24.420 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:24.420 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 
1 00:22:24.420 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:24.420 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:24.420 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:22:24.420 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:24.420 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:24.420 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:24.420 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:22:24.420 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.mZYBWbXbzO 00:22:24.420 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:24.420 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.ba2z26N2kf 00:22:24.420 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:24.420 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:24.420 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.mZYBWbXbzO 00:22:24.420 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.ba2z26N2kf 00:22:24.420 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:24.678 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:25.245 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.mZYBWbXbzO 00:22:25.245 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.mZYBWbXbzO 00:22:25.245 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:25.503 [2024-07-26 16:27:45.177333] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:25.503 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:25.761 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:26.021 [2024-07-26 16:27:45.662702] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:26.021 [2024-07-26 16:27:45.663014] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:26.021 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:26.279 malloc0 00:22:26.279 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:26.539 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mZYBWbXbzO 00:22:26.799 [2024-07-26 16:27:46.453734] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:26.799 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.mZYBWbXbzO 00:22:26.799 EAL: No free 2048 kB hugepages reported on node 1 00:22:39.005 Initializing NVMe Controllers 00:22:39.005 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:39.005 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:39.005 Initialization complete. Launching workers. 00:22:39.005 ======================================================== 00:22:39.005 Latency(us) 00:22:39.005 Device Information : IOPS MiB/s Average min max 00:22:39.005 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5613.99 21.93 11404.77 2145.41 12819.01 00:22:39.005 ======================================================== 00:22:39.005 Total : 5613.99 21.93 11404.77 2145.41 12819.01 00:22:39.005 00:22:39.005 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mZYBWbXbzO 00:22:39.005 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:39.005 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:39.005 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:39.005 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.mZYBWbXbzO' 00:22:39.005 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:39.005 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=688731 00:22:39.005 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:39.005 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:39.005 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 688731 /var/tmp/bdevperf.sock 00:22:39.005 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 688731 ']' 00:22:39.005 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:39.005 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:39.005 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:39.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:39.005 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:39.005 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.005 [2024-07-26 16:27:56.765114] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:39.005 [2024-07-26 16:27:56.765268] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid688731 ] 00:22:39.005 EAL: No free 2048 kB hugepages reported on node 1 00:22:39.005 [2024-07-26 16:27:56.886518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.005 [2024-07-26 16:27:57.114551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:39.005 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:39.005 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:39.005 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mZYBWbXbzO 00:22:39.005 [2024-07-26 16:27:57.959881] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:39.005 [2024-07-26 16:27:57.960080] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:39.005 TLSTESTn1 00:22:39.005 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:39.005 Running I/O for 10 seconds... 
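
The two NVMeTLSkey-1:01:... strings generated by format_interchange_psk above are worth unpacking while this bdevperf pass runs. Decoding the base64 payload shows it begins with the ASCII bytes of the configured key, so the helper appears to append a 4-byte CRC32 trailer to the key string, base64-encode the result, and wrap it in the NVMeTLSkey-1:<hash>: prefix. A minimal sketch under those assumptions (the CRC byte order and the inline python, mirroring nvmf/common.sh's format_key, are assumptions, not confirmed by this log):

format_interchange_psk_sketch() {
    # key: configured PSK exactly as passed in tls.sh; digest: hash indicator (1 -> ":01:")
    local key=$1 digest=$2
    python3 - "$key" "$digest" << 'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()
digest = int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed little-endian CRC32 trailer
print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
PYEOF
}

# If the assumed layout is right, this reproduces the first key used above:
format_interchange_psk_sketch 00112233445566778899aabbccddeeff 1
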
00:22:48.986 00:22:48.986 Latency(us) 00:22:48.986 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.986 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:48.986 Verification LBA range: start 0x0 length 0x2000 00:22:48.986 TLSTESTn1 : 10.05 2583.56 10.09 0.00 0.00 49405.12 7815.77 72623.60 00:22:48.986 =================================================================================================================== 00:22:48.986 Total : 2583.56 10.09 0.00 0.00 49405.12 7815.77 72623.60 00:22:48.986 0 00:22:48.986 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:48.986 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 688731 00:22:48.986 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 688731 ']' 00:22:48.986 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 688731 00:22:48.986 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:48.986 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:48.986 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 688731 00:22:48.986 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:48.986 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:48.986 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 688731' 00:22:48.986 killing process with pid 688731 00:22:48.986 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 688731 00:22:48.986 Received shutdown signal, test time was about 10.000000 seconds 00:22:48.986 00:22:48.986 Latency(us) 00:22:48.986 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.986 =================================================================================================================== 00:22:48.986 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:48.986 [2024-07-26 16:28:08.276452] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:48.986 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 688731 00:22:49.554 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ba2z26N2kf 00:22:49.554 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:49.554 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ba2z26N2kf 00:22:49.554 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:49.554 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:49.554 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:49.554 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:49.554 
16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ba2z26N2kf 00:22:49.554 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:49.554 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:49.554 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:49.554 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ba2z26N2kf' 00:22:49.554 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:49.554 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=690183 00:22:49.554 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:49.554 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:49.554 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 690183 /var/tmp/bdevperf.sock 00:22:49.554 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 690183 ']' 00:22:49.554 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:49.554 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:49.554 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:49.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:49.554 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:49.554 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.816 [2024-07-26 16:28:09.344145] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:22:49.816 [2024-07-26 16:28:09.344306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid690183 ] 00:22:49.816 EAL: No free 2048 kB hugepages reported on node 1 00:22:49.816 [2024-07-26 16:28:09.467315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.108 [2024-07-26 16:28:09.695765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:50.674 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:50.674 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:50.674 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ba2z26N2kf 00:22:50.934 [2024-07-26 16:28:10.557408] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:50.934 [2024-07-26 16:28:10.557635] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:50.934 [2024-07-26 16:28:10.568398] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:50.934 [2024-07-26 16:28:10.569023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:22:50.934 [2024-07-26 16:28:10.569974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:22:50.934 [2024-07-26 16:28:10.570966] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:50.934 [2024-07-26 16:28:10.571002] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:50.934 [2024-07-26 16:28:10.571030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
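
The attach that just failed used /tmp/tmp.ba2z26N2kf, the second key, which was never registered for host1 on cnode1, so the handshake collapses and the controller ends up in failed state; the request and -5 response recorded below document that. The case only counts as a pass because run_bdevperf is wrapped in NOT at tls.sh@146, which inverts the exit status. A minimal stand-in for that pattern (the real helper lives in autotest_common.sh and may differ in detail):

NOT_sketch() {
    # Succeed only if the wrapped command fails -- the expected-failure idiom used here.
    if "$@"; then
        echo "expected failure, but command succeeded: $*" >&2
        return 1
    fi
    return 0
}
# NOT_sketch run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ba2z26N2kf
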
00:22:50.934 request: 00:22:50.934 { 00:22:50.934 "name": "TLSTEST", 00:22:50.934 "trtype": "tcp", 00:22:50.934 "traddr": "10.0.0.2", 00:22:50.934 "adrfam": "ipv4", 00:22:50.934 "trsvcid": "4420", 00:22:50.934 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:50.934 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:50.934 "prchk_reftag": false, 00:22:50.934 "prchk_guard": false, 00:22:50.934 "hdgst": false, 00:22:50.934 "ddgst": false, 00:22:50.934 "psk": "/tmp/tmp.ba2z26N2kf", 00:22:50.934 "method": "bdev_nvme_attach_controller", 00:22:50.934 "req_id": 1 00:22:50.934 } 00:22:50.934 Got JSON-RPC error response 00:22:50.934 response: 00:22:50.934 { 00:22:50.934 "code": -5, 00:22:50.934 "message": "Input/output error" 00:22:50.934 } 00:22:50.934 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 690183 00:22:50.934 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 690183 ']' 00:22:50.934 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 690183 00:22:50.935 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:50.935 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:50.935 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 690183 00:22:50.935 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:50.935 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:50.935 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 690183' 00:22:50.935 killing process with pid 690183 00:22:50.935 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 690183 00:22:50.935 Received shutdown signal, test time was about 10.000000 seconds 00:22:50.935 00:22:50.935 Latency(us) 00:22:50.935 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.935 =================================================================================================================== 00:22:50.935 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:50.935 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 690183 00:22:50.935 [2024-07-26 16:28:10.624191] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:51.873 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:51.873 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:51.873 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:51.873 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:51.873 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:51.873 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.mZYBWbXbzO 00:22:51.873 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:51.873 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.mZYBWbXbzO 00:22:51.873 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:51.873 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:51.873 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:51.873 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:51.873 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.mZYBWbXbzO 00:22:51.873 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:51.873 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:51.873 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:51.873 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.mZYBWbXbzO' 00:22:51.873 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:51.873 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=690457 00:22:51.873 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:51.873 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:51.873 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 690457 /var/tmp/bdevperf.sock 00:22:51.873 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 690457 ']' 00:22:51.873 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:51.873 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:51.873 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:51.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:51.873 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:51.873 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.132 [2024-07-26 16:28:11.661349] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:22:52.133 [2024-07-26 16:28:11.661514] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid690457 ] 00:22:52.133 EAL: No free 2048 kB hugepages reported on node 1 00:22:52.133 [2024-07-26 16:28:11.783180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.392 [2024-07-26 16:28:12.011935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:52.958 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:52.958 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:52.958 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.mZYBWbXbzO 00:22:53.217 [2024-07-26 16:28:12.865635] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:53.217 [2024-07-26 16:28:12.865837] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:53.217 [2024-07-26 16:28:12.880322] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:53.217 [2024-07-26 16:28:12.880382] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:53.217 [2024-07-26 16:28:12.880456] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:53.217 [2024-07-26 16:28:12.880955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:22:53.217 [2024-07-26 16:28:12.881932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:22:53.217 [2024-07-26 16:28:12.882923] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:53.217 [2024-07-26 16:28:12.882958] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:53.217 [2024-07-26 16:28:12.882985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
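
This case (tls.sh@149) presents the valid key but identifies as host2, and the target-side errors show why it fails: the PSK lookup is keyed on the identity string NVMe0R01 <hostnqn> <subnqn>, and no PSK was ever registered for host2 on cnode1. Purely as an illustration of what the lookup expects (the test deliberately omits this step), the registration that would satisfy it mirrors the add_host call made for host1 earlier:

# Hypothetical -- not executed by tls.sh; shown only to make the failed lookup concrete.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 \
    --psk /tmp/tmp.mZYBWbXbzO
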
00:22:53.217 request: 00:22:53.217 { 00:22:53.217 "name": "TLSTEST", 00:22:53.217 "trtype": "tcp", 00:22:53.217 "traddr": "10.0.0.2", 00:22:53.217 "adrfam": "ipv4", 00:22:53.217 "trsvcid": "4420", 00:22:53.217 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.217 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:53.217 "prchk_reftag": false, 00:22:53.217 "prchk_guard": false, 00:22:53.217 "hdgst": false, 00:22:53.217 "ddgst": false, 00:22:53.217 "psk": "/tmp/tmp.mZYBWbXbzO", 00:22:53.217 "method": "bdev_nvme_attach_controller", 00:22:53.217 "req_id": 1 00:22:53.217 } 00:22:53.217 Got JSON-RPC error response 00:22:53.217 response: 00:22:53.217 { 00:22:53.217 "code": -5, 00:22:53.217 "message": "Input/output error" 00:22:53.217 } 00:22:53.217 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 690457 00:22:53.217 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 690457 ']' 00:22:53.217 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 690457 00:22:53.217 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:53.217 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:53.217 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 690457 00:22:53.217 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:53.217 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:53.217 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 690457' 00:22:53.217 killing process with pid 690457 00:22:53.217 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 690457 00:22:53.217 Received shutdown signal, test time was about 10.000000 seconds 00:22:53.217 00:22:53.217 Latency(us) 00:22:53.218 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.218 =================================================================================================================== 00:22:53.218 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:53.218 [2024-07-26 16:28:12.933705] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:53.218 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 690457 00:22:54.154 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:54.154 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:54.154 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:54.154 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:54.154 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:54.154 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.mZYBWbXbzO 00:22:54.154 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:54.154 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.mZYBWbXbzO 00:22:54.154 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:54.154 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:54.154 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:54.154 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:54.154 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.mZYBWbXbzO 00:22:54.154 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:54.154 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:54.154 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:54.154 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.mZYBWbXbzO' 00:22:54.154 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:54.154 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=690728 00:22:54.154 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:54.154 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:54.154 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 690728 /var/tmp/bdevperf.sock 00:22:54.154 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 690728 ']' 00:22:54.154 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:54.154 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:54.154 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:54.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:54.154 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:54.154 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:54.412 [2024-07-26 16:28:13.946667] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:22:54.412 [2024-07-26 16:28:13.946823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid690728 ] 00:22:54.412 EAL: No free 2048 kB hugepages reported on node 1 00:22:54.412 [2024-07-26 16:28:14.072992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.671 [2024-07-26 16:28:14.313040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:55.237 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:55.237 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:55.237 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mZYBWbXbzO 00:22:55.496 [2024-07-26 16:28:15.175724] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:55.496 [2024-07-26 16:28:15.175923] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:55.496 [2024-07-26 16:28:15.186293] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:55.496 [2024-07-26 16:28:15.186353] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:55.496 [2024-07-26 16:28:15.186424] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:55.496 [2024-07-26 16:28:15.187228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:22:55.496 [2024-07-26 16:28:15.188201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:22:55.496 [2024-07-26 16:28:15.189194] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:55.496 [2024-07-26 16:28:15.189229] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:55.496 [2024-07-26 16:28:15.189258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:22:55.496 request: 00:22:55.496 { 00:22:55.496 "name": "TLSTEST", 00:22:55.496 "trtype": "tcp", 00:22:55.496 "traddr": "10.0.0.2", 00:22:55.496 "adrfam": "ipv4", 00:22:55.496 "trsvcid": "4420", 00:22:55.496 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:55.496 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:55.496 "prchk_reftag": false, 00:22:55.496 "prchk_guard": false, 00:22:55.496 "hdgst": false, 00:22:55.496 "ddgst": false, 00:22:55.496 "psk": "/tmp/tmp.mZYBWbXbzO", 00:22:55.496 "method": "bdev_nvme_attach_controller", 00:22:55.496 "req_id": 1 00:22:55.496 } 00:22:55.496 Got JSON-RPC error response 00:22:55.496 response: 00:22:55.496 { 00:22:55.496 "code": -5, 00:22:55.496 "message": "Input/output error" 00:22:55.496 } 00:22:55.496 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 690728 00:22:55.496 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 690728 ']' 00:22:55.496 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 690728 00:22:55.496 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:55.496 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:55.496 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 690728 00:22:55.496 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:55.496 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:55.496 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 690728' 00:22:55.496 killing process with pid 690728 00:22:55.496 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 690728 00:22:55.496 Received shutdown signal, test time was about 10.000000 seconds 00:22:55.496 00:22:55.496 Latency(us) 00:22:55.496 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:55.496 =================================================================================================================== 00:22:55.496 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:55.496 [2024-07-26 16:28:15.239831] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:55.496 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 690728 00:22:56.434 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:56.434 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:56.434 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:56.434 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:56.434 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:56.434 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:56.434 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:56.434 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:56.434 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:56.434 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:56.434 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:56.434 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:56.434 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:56.434 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:56.434 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:56.434 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:56.434 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:56.434 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:56.434 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=691009 00:22:56.434 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:56.434 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:56.434 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 691009 /var/tmp/bdevperf.sock 00:22:56.434 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 691009 ']' 00:22:56.434 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:56.434 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:56.435 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:56.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:56.435 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:56.435 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:56.692 [2024-07-26 16:28:16.252367] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:22:56.692 [2024-07-26 16:28:16.252510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid691009 ] 00:22:56.692 EAL: No free 2048 kB hugepages reported on node 1 00:22:56.692 [2024-07-26 16:28:16.373513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.950 [2024-07-26 16:28:16.598621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:57.514 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:57.514 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:57.514 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:57.773 [2024-07-26 16:28:17.402227] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:57.773 [2024-07-26 16:28:17.404362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:22:57.773 [2024-07-26 16:28:17.405328] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:57.773 [2024-07-26 16:28:17.405388] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:57.773 [2024-07-26 16:28:17.405414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
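
The only difference between this case (tls.sh@155) and the successful TLSTESTn1 runs is the missing --psk argument; because the listener was added with -k, a plain TCP attach is refused and bdev_nvme_attach_controller returns the Input/output error dumped just below. Side by side, with rpc.py standing in for the full scripts/rpc.py path used in this job:

# Succeeds: TLS attach with the registered key (as in the TLSTESTn1 runs above)
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.mZYBWbXbzO
# Fails against the -k listener: no PSK, so the connection is torn down (errno 107 above)
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
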
00:22:57.773 request: 00:22:57.773 { 00:22:57.773 "name": "TLSTEST", 00:22:57.773 "trtype": "tcp", 00:22:57.773 "traddr": "10.0.0.2", 00:22:57.773 "adrfam": "ipv4", 00:22:57.773 "trsvcid": "4420", 00:22:57.773 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.773 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:57.773 "prchk_reftag": false, 00:22:57.773 "prchk_guard": false, 00:22:57.773 "hdgst": false, 00:22:57.773 "ddgst": false, 00:22:57.773 "method": "bdev_nvme_attach_controller", 00:22:57.773 "req_id": 1 00:22:57.773 } 00:22:57.773 Got JSON-RPC error response 00:22:57.773 response: 00:22:57.773 { 00:22:57.773 "code": -5, 00:22:57.773 "message": "Input/output error" 00:22:57.773 } 00:22:57.773 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 691009 00:22:57.773 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 691009 ']' 00:22:57.773 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 691009 00:22:57.773 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:57.773 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:57.773 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 691009 00:22:57.773 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:57.773 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:57.773 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 691009' 00:22:57.773 killing process with pid 691009 00:22:57.773 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 691009 00:22:57.773 Received shutdown signal, test time was about 10.000000 seconds 00:22:57.773 00:22:57.773 Latency(us) 00:22:57.773 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.773 =================================================================================================================== 00:22:57.773 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:57.773 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 691009 00:22:58.711 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:58.711 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:58.711 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:58.711 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:58.711 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:58.711 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 686714 00:22:58.711 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 686714 ']' 00:22:58.711 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 686714 00:22:58.711 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:58.711 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:58.711 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 686714 00:22:58.711 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:58.711 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:58.711 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 686714' 00:22:58.711 killing process with pid 686714 00:22:58.711 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 686714 00:22:58.711 [2024-07-26 16:28:18.446905] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:58.711 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 686714 00:23:00.612 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:00.612 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:00.612 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:00.612 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:00.612 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:00.612 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:23:00.612 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:00.612 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:00.612 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:23:00.612 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.oDLnr2BK6p 00:23:00.612 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:00.612 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.oDLnr2BK6p 00:23:00.612 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:23:00.612 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:00.612 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:00.612 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:00.612 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=691425 00:23:00.613 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:00.613 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 691425 00:23:00.613 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 691425 ']' 00:23:00.613 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.613 16:28:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:00.613 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.613 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:00.613 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:00.613 [2024-07-26 16:28:20.067754] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:00.613 [2024-07-26 16:28:20.067895] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.613 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.613 [2024-07-26 16:28:20.202331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.870 [2024-07-26 16:28:20.450985] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:00.870 [2024-07-26 16:28:20.451069] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:00.870 [2024-07-26 16:28:20.451100] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:00.870 [2024-07-26 16:28:20.451126] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:00.870 [2024-07-26 16:28:20.451147] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
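
The longer key generated at tls.sh@159 differs from the earlier ones only in the length of the configured key (48 bytes) and the hash indicator, which is rendered as :02: instead of :01:. Assuming the derivation sketched earlier is accurate, the same helper reproduces it:

# Same assumed CRC32+base64 layout; only the key and the hash-indicator field change.
format_interchange_psk_sketch 00112233445566778899aabbccddeeff0011223344556677 2
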
00:23:00.870 [2024-07-26 16:28:20.451194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.438 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:01.438 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:01.438 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:01.438 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:01.438 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:01.438 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:01.438 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.oDLnr2BK6p 00:23:01.438 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.oDLnr2BK6p 00:23:01.438 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:01.695 [2024-07-26 16:28:21.275989] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:01.695 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:01.953 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:02.211 [2024-07-26 16:28:21.845617] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:02.211 [2024-07-26 16:28:21.845910] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:02.211 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:02.469 malloc0 00:23:02.469 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:02.727 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.oDLnr2BK6p 00:23:02.985 [2024-07-26 16:28:22.726874] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:02.985 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oDLnr2BK6p 00:23:02.985 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:02.985 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:02.985 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:02.985 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.oDLnr2BK6p' 00:23:02.985 16:28:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:03.244 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=691718 00:23:03.244 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:03.244 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:03.244 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 691718 /var/tmp/bdevperf.sock 00:23:03.244 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 691718 ']' 00:23:03.244 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:03.244 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:03.244 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:03.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:03.244 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:03.244 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:03.244 [2024-07-26 16:28:22.828579] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:03.244 [2024-07-26 16:28:22.828727] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid691718 ] 00:23:03.244 EAL: No free 2048 kB hugepages reported on node 1 00:23:03.244 [2024-07-26 16:28:22.954357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.503 [2024-07-26 16:28:23.184613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:04.094 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:04.094 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:04.094 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.oDLnr2BK6p 00:23:04.354 [2024-07-26 16:28:24.064287] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:04.354 [2024-07-26 16:28:24.064500] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:04.615 TLSTESTn1 00:23:04.615 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:04.615 Running I/O for 10 seconds... 
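
For reference while this second bdevperf pass runs: the target bring-up it exercises is the same sequence used throughout this file, now with the 0600 key file /tmp/tmp.oDLnr2BK6p. Condensed, with the RPC invocations copied from the log (the address, NQNs and paths are the ones specific to this job):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
KEY=/tmp/tmp.oDLnr2BK6p   # interchange-format PSK, mode 0600

$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"
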
00:23:16.825 00:23:16.825 Latency(us) 00:23:16.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:16.825 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:16.825 Verification LBA range: start 0x0 length 0x2000 00:23:16.825 TLSTESTn1 : 10.06 2494.07 9.74 0.00 0.00 51159.20 13204.29 69905.07 00:23:16.825 =================================================================================================================== 00:23:16.825 Total : 2494.07 9.74 0.00 0.00 51159.20 13204.29 69905.07 00:23:16.825 0 00:23:16.825 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:16.825 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 691718 00:23:16.825 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 691718 ']' 00:23:16.825 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 691718 00:23:16.825 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:16.825 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:16.825 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 691718 00:23:16.825 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:16.825 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:16.825 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 691718' 00:23:16.825 killing process with pid 691718 00:23:16.825 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 691718 00:23:16.825 Received shutdown signal, test time was about 10.000000 seconds 00:23:16.825 00:23:16.825 Latency(us) 00:23:16.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:16.825 =================================================================================================================== 00:23:16.825 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:16.825 [2024-07-26 16:28:34.402858] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:16.825 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 691718 00:23:16.825 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.oDLnr2BK6p 00:23:16.825 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oDLnr2BK6p 00:23:16.825 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:16.825 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oDLnr2BK6p 00:23:16.825 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:16.825 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:16.825 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:16.825 16:28:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:16.825 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oDLnr2BK6p 00:23:16.825 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:16.825 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:16.825 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:16.825 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.oDLnr2BK6p' 00:23:16.825 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:16.825 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=693179 00:23:16.825 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:16.825 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:16.825 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 693179 /var/tmp/bdevperf.sock 00:23:16.825 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 693179 ']' 00:23:16.825 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:16.825 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:16.825 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:16.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:16.825 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:16.825 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.825 [2024-07-26 16:28:35.494628] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:23:16.825 [2024-07-26 16:28:35.494781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid693179 ] 00:23:16.825 EAL: No free 2048 kB hugepages reported on node 1 00:23:16.825 [2024-07-26 16:28:35.616868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.825 [2024-07-26 16:28:35.843263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:16.825 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:16.825 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:16.825 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.oDLnr2BK6p 00:23:17.084 [2024-07-26 16:28:36.666949] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:17.084 [2024-07-26 16:28:36.667070] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:17.084 [2024-07-26 16:28:36.667105] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.oDLnr2BK6p 00:23:17.084 request: 00:23:17.084 { 00:23:17.084 "name": "TLSTEST", 00:23:17.084 "trtype": "tcp", 00:23:17.084 "traddr": "10.0.0.2", 00:23:17.084 "adrfam": "ipv4", 00:23:17.084 "trsvcid": "4420", 00:23:17.084 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.084 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:17.084 "prchk_reftag": false, 00:23:17.084 "prchk_guard": false, 00:23:17.084 "hdgst": false, 00:23:17.084 "ddgst": false, 00:23:17.084 "psk": "/tmp/tmp.oDLnr2BK6p", 00:23:17.084 "method": "bdev_nvme_attach_controller", 00:23:17.084 "req_id": 1 00:23:17.084 } 00:23:17.084 Got JSON-RPC error response 00:23:17.084 response: 00:23:17.084 { 00:23:17.084 "code": -1, 00:23:17.084 "message": "Operation not permitted" 00:23:17.084 } 00:23:17.084 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 693179 00:23:17.084 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 693179 ']' 00:23:17.084 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 693179 00:23:17.084 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:17.084 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:17.084 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 693179 00:23:17.084 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:17.084 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:17.084 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 693179' 00:23:17.084 killing process with pid 693179 00:23:17.084 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 693179 00:23:17.084 Received shutdown signal, test time was about 10.000000 seconds 00:23:17.084 
00:23:17.084 Latency(us) 00:23:17.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.084 =================================================================================================================== 00:23:17.084 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:17.084 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 693179 00:23:18.024 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:18.024 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:18.024 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:18.024 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:18.024 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:18.024 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 691425 00:23:18.024 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 691425 ']' 00:23:18.024 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 691425 00:23:18.024 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:18.024 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:18.024 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 691425 00:23:18.024 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:18.024 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:18.024 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 691425' 00:23:18.024 killing process with pid 691425 00:23:18.024 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 691425 00:23:18.024 [2024-07-26 16:28:37.670279] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:18.024 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 691425 00:23:19.402 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:23:19.402 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:19.402 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:19.402 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.402 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:19.402 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=693700 00:23:19.402 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 693700 00:23:19.402 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 693700 ']' 00:23:19.402 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.402 16:28:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:19.402 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:19.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:19.402 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:19.402 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.661 [2024-07-26 16:28:39.233167] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:19.661 [2024-07-26 16:28:39.233322] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:19.661 EAL: No free 2048 kB hugepages reported on node 1 00:23:19.661 [2024-07-26 16:28:39.376053] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.921 [2024-07-26 16:28:39.632756] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:19.921 [2024-07-26 16:28:39.632838] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:19.921 [2024-07-26 16:28:39.632867] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:19.921 [2024-07-26 16:28:39.632893] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:19.921 [2024-07-26 16:28:39.632916] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
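The failed attach a few entries above is the first half of the PSK permission check: once the key file was loosened to 0666, bdev_nvme_attach_controller refused to load it and the RPC returned -1 ("Operation not permitted"), after which the bdevperf instance was torn down. A condensed sketch of that negative path, reusing the exact RPC calls from this run (the key path, NQNs and socket paths are the ones above; the relative rpc.py and bdevperf locations in an SPDK checkout are assumptions):

# the key is deliberately made world-readable; SPDK is expected to reject it
chmod 0666 /tmp/tmp.oDLnr2BK6p

# idle bdevperf instance (-z) listening on its own RPC socket
./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

# expected to fail with "Incorrect permissions for PSK file" / "Operation not permitted"
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.oDLnr2BK6p \
    || echo "attach rejected as expected"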
00:23:19.921 [2024-07-26 16:28:39.632967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.487 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:20.487 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:20.487 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:20.487 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:20.487 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.487 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:20.487 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.oDLnr2BK6p 00:23:20.487 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:20.487 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.oDLnr2BK6p 00:23:20.487 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:23:20.487 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:20.487 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:23:20.487 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:20.487 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.oDLnr2BK6p 00:23:20.487 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.oDLnr2BK6p 00:23:20.487 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:20.745 [2024-07-26 16:28:40.452252] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:20.745 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:21.002 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:21.260 [2024-07-26 16:28:40.925539] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:21.260 [2024-07-26 16:28:40.925889] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:21.260 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:21.519 malloc0 00:23:21.778 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:22.037 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.oDLnr2BK6p 00:23:22.037 [2024-07-26 16:28:41.780854] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:22.037 [2024-07-26 16:28:41.780924] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:23:22.037 [2024-07-26 16:28:41.780968] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:22.037 request: 00:23:22.037 { 00:23:22.037 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:22.037 "host": "nqn.2016-06.io.spdk:host1", 00:23:22.037 "psk": "/tmp/tmp.oDLnr2BK6p", 00:23:22.037 "method": "nvmf_subsystem_add_host", 00:23:22.037 "req_id": 1 00:23:22.037 } 00:23:22.037 Got JSON-RPC error response 00:23:22.037 response: 00:23:22.037 { 00:23:22.037 "code": -32603, 00:23:22.037 "message": "Internal error" 00:23:22.037 } 00:23:22.296 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:22.296 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:22.296 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:22.296 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:22.296 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 693700 00:23:22.296 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 693700 ']' 00:23:22.296 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 693700 00:23:22.296 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:22.296 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:22.296 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 693700 00:23:22.296 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:22.296 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:22.296 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 693700' 00:23:22.296 killing process with pid 693700 00:23:22.296 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 693700 00:23:22.296 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 693700 00:23:23.670 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.oDLnr2BK6p 00:23:23.670 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:23:23.670 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:23.670 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:23.670 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.670 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=694137 00:23:23.670 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:23.670 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 
694137 00:23:23.670 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 694137 ']' 00:23:23.670 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.670 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:23.670 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:23.670 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:23.670 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.670 [2024-07-26 16:28:43.364527] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:23.670 [2024-07-26 16:28:43.364714] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:23.929 EAL: No free 2048 kB hugepages reported on node 1 00:23:23.929 [2024-07-26 16:28:43.510202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.188 [2024-07-26 16:28:43.765073] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:24.188 [2024-07-26 16:28:43.765154] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:24.188 [2024-07-26 16:28:43.765182] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:24.188 [2024-07-26 16:28:43.765207] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:24.188 [2024-07-26 16:28:43.765229] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
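The same permission check was then exercised on the target side: with the key still at 0666, the setup sequence got as far as nvmf_subsystem_add_host, which failed with -32603 ("Could not retrieve PSK from file" / "Internal error"), that target was killed, and the key was restored to 0600 before the target now starting. A sketch of that sequence against the target's default RPC socket, using the calls shown in the log (rpc.py location assumed); with the key back at mode 0600 the identical sequence succeeds:

./scripts/rpc.py nvmf_create_transport -t tcp -o
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
# -k marks the listener as requiring a secure channel (shown as "secure_channel": true in save_config)
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# fails with "Internal error" while /tmp/tmp.oDLnr2BK6p is still mode 0666
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.oDLnr2BK6p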
00:23:24.188 [2024-07-26 16:28:43.765275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.754 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:24.754 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:24.754 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:24.754 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:24.754 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.754 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.754 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.oDLnr2BK6p 00:23:24.754 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.oDLnr2BK6p 00:23:24.754 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:25.013 [2024-07-26 16:28:44.525269] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:25.013 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:25.273 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:25.273 [2024-07-26 16:28:45.022668] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:25.273 [2024-07-26 16:28:45.022991] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.533 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:25.793 malloc0 00:23:25.793 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:26.052 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.oDLnr2BK6p 00:23:26.311 [2024-07-26 16:28:45.824717] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:26.311 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=694548 00:23:26.311 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:26.311 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:26.311 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 694548 /var/tmp/bdevperf.sock 00:23:26.311 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # 
'[' -z 694548 ']' 00:23:26.311 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:26.311 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:26.311 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:26.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:26.311 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:26.311 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.311 [2024-07-26 16:28:45.916284] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:26.311 [2024-07-26 16:28:45.916448] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid694548 ] 00:23:26.311 EAL: No free 2048 kB hugepages reported on node 1 00:23:26.311 [2024-07-26 16:28:46.036911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.576 [2024-07-26 16:28:46.263113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:27.196 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:27.196 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:27.196 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.oDLnr2BK6p 00:23:27.454 [2024-07-26 16:28:47.064214] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:27.454 [2024-07-26 16:28:47.064411] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:27.454 TLSTESTn1 00:23:27.454 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:28.020 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:23:28.020 "subsystems": [ 00:23:28.020 { 00:23:28.020 "subsystem": "keyring", 00:23:28.020 "config": [] 00:23:28.020 }, 00:23:28.020 { 00:23:28.020 "subsystem": "iobuf", 00:23:28.020 "config": [ 00:23:28.020 { 00:23:28.020 "method": "iobuf_set_options", 00:23:28.020 "params": { 00:23:28.020 "small_pool_count": 8192, 00:23:28.020 "large_pool_count": 1024, 00:23:28.020 "small_bufsize": 8192, 00:23:28.020 "large_bufsize": 135168 00:23:28.020 } 00:23:28.020 } 00:23:28.020 ] 00:23:28.020 }, 00:23:28.020 { 00:23:28.021 "subsystem": "sock", 00:23:28.021 "config": [ 00:23:28.021 { 00:23:28.021 "method": "sock_set_default_impl", 00:23:28.021 "params": { 00:23:28.021 "impl_name": "posix" 00:23:28.021 } 00:23:28.021 }, 00:23:28.021 { 00:23:28.021 "method": "sock_impl_set_options", 00:23:28.021 "params": { 00:23:28.021 "impl_name": "ssl", 00:23:28.021 "recv_buf_size": 4096, 00:23:28.021 "send_buf_size": 4096, 
00:23:28.021 "enable_recv_pipe": true, 00:23:28.021 "enable_quickack": false, 00:23:28.021 "enable_placement_id": 0, 00:23:28.021 "enable_zerocopy_send_server": true, 00:23:28.021 "enable_zerocopy_send_client": false, 00:23:28.021 "zerocopy_threshold": 0, 00:23:28.021 "tls_version": 0, 00:23:28.021 "enable_ktls": false 00:23:28.021 } 00:23:28.021 }, 00:23:28.021 { 00:23:28.021 "method": "sock_impl_set_options", 00:23:28.021 "params": { 00:23:28.021 "impl_name": "posix", 00:23:28.021 "recv_buf_size": 2097152, 00:23:28.021 "send_buf_size": 2097152, 00:23:28.021 "enable_recv_pipe": true, 00:23:28.021 "enable_quickack": false, 00:23:28.021 "enable_placement_id": 0, 00:23:28.021 "enable_zerocopy_send_server": true, 00:23:28.021 "enable_zerocopy_send_client": false, 00:23:28.021 "zerocopy_threshold": 0, 00:23:28.021 "tls_version": 0, 00:23:28.021 "enable_ktls": false 00:23:28.021 } 00:23:28.021 } 00:23:28.021 ] 00:23:28.021 }, 00:23:28.021 { 00:23:28.021 "subsystem": "vmd", 00:23:28.021 "config": [] 00:23:28.021 }, 00:23:28.021 { 00:23:28.021 "subsystem": "accel", 00:23:28.021 "config": [ 00:23:28.021 { 00:23:28.021 "method": "accel_set_options", 00:23:28.021 "params": { 00:23:28.021 "small_cache_size": 128, 00:23:28.021 "large_cache_size": 16, 00:23:28.021 "task_count": 2048, 00:23:28.021 "sequence_count": 2048, 00:23:28.021 "buf_count": 2048 00:23:28.021 } 00:23:28.021 } 00:23:28.021 ] 00:23:28.021 }, 00:23:28.021 { 00:23:28.021 "subsystem": "bdev", 00:23:28.021 "config": [ 00:23:28.021 { 00:23:28.021 "method": "bdev_set_options", 00:23:28.021 "params": { 00:23:28.021 "bdev_io_pool_size": 65535, 00:23:28.021 "bdev_io_cache_size": 256, 00:23:28.021 "bdev_auto_examine": true, 00:23:28.021 "iobuf_small_cache_size": 128, 00:23:28.021 "iobuf_large_cache_size": 16 00:23:28.021 } 00:23:28.021 }, 00:23:28.021 { 00:23:28.021 "method": "bdev_raid_set_options", 00:23:28.021 "params": { 00:23:28.021 "process_window_size_kb": 1024, 00:23:28.021 "process_max_bandwidth_mb_sec": 0 00:23:28.021 } 00:23:28.021 }, 00:23:28.021 { 00:23:28.021 "method": "bdev_iscsi_set_options", 00:23:28.021 "params": { 00:23:28.021 "timeout_sec": 30 00:23:28.021 } 00:23:28.021 }, 00:23:28.021 { 00:23:28.021 "method": "bdev_nvme_set_options", 00:23:28.021 "params": { 00:23:28.021 "action_on_timeout": "none", 00:23:28.021 "timeout_us": 0, 00:23:28.021 "timeout_admin_us": 0, 00:23:28.021 "keep_alive_timeout_ms": 10000, 00:23:28.021 "arbitration_burst": 0, 00:23:28.021 "low_priority_weight": 0, 00:23:28.021 "medium_priority_weight": 0, 00:23:28.021 "high_priority_weight": 0, 00:23:28.021 "nvme_adminq_poll_period_us": 10000, 00:23:28.021 "nvme_ioq_poll_period_us": 0, 00:23:28.021 "io_queue_requests": 0, 00:23:28.021 "delay_cmd_submit": true, 00:23:28.021 "transport_retry_count": 4, 00:23:28.021 "bdev_retry_count": 3, 00:23:28.021 "transport_ack_timeout": 0, 00:23:28.021 "ctrlr_loss_timeout_sec": 0, 00:23:28.021 "reconnect_delay_sec": 0, 00:23:28.021 "fast_io_fail_timeout_sec": 0, 00:23:28.021 "disable_auto_failback": false, 00:23:28.021 "generate_uuids": false, 00:23:28.021 "transport_tos": 0, 00:23:28.021 "nvme_error_stat": false, 00:23:28.021 "rdma_srq_size": 0, 00:23:28.021 "io_path_stat": false, 00:23:28.021 "allow_accel_sequence": false, 00:23:28.021 "rdma_max_cq_size": 0, 00:23:28.021 "rdma_cm_event_timeout_ms": 0, 00:23:28.021 "dhchap_digests": [ 00:23:28.021 "sha256", 00:23:28.021 "sha384", 00:23:28.021 "sha512" 00:23:28.021 ], 00:23:28.021 "dhchap_dhgroups": [ 00:23:28.021 "null", 00:23:28.021 "ffdhe2048", 00:23:28.021 
"ffdhe3072", 00:23:28.021 "ffdhe4096", 00:23:28.021 "ffdhe6144", 00:23:28.021 "ffdhe8192" 00:23:28.021 ] 00:23:28.021 } 00:23:28.021 }, 00:23:28.021 { 00:23:28.021 "method": "bdev_nvme_set_hotplug", 00:23:28.021 "params": { 00:23:28.021 "period_us": 100000, 00:23:28.021 "enable": false 00:23:28.021 } 00:23:28.021 }, 00:23:28.021 { 00:23:28.021 "method": "bdev_malloc_create", 00:23:28.021 "params": { 00:23:28.021 "name": "malloc0", 00:23:28.021 "num_blocks": 8192, 00:23:28.021 "block_size": 4096, 00:23:28.021 "physical_block_size": 4096, 00:23:28.021 "uuid": "a5db13b0-38c6-4e85-93cf-e0fa910b857c", 00:23:28.021 "optimal_io_boundary": 0, 00:23:28.021 "md_size": 0, 00:23:28.021 "dif_type": 0, 00:23:28.021 "dif_is_head_of_md": false, 00:23:28.021 "dif_pi_format": 0 00:23:28.021 } 00:23:28.021 }, 00:23:28.021 { 00:23:28.021 "method": "bdev_wait_for_examine" 00:23:28.021 } 00:23:28.021 ] 00:23:28.021 }, 00:23:28.021 { 00:23:28.021 "subsystem": "nbd", 00:23:28.021 "config": [] 00:23:28.021 }, 00:23:28.021 { 00:23:28.021 "subsystem": "scheduler", 00:23:28.021 "config": [ 00:23:28.021 { 00:23:28.021 "method": "framework_set_scheduler", 00:23:28.021 "params": { 00:23:28.021 "name": "static" 00:23:28.021 } 00:23:28.021 } 00:23:28.021 ] 00:23:28.021 }, 00:23:28.021 { 00:23:28.021 "subsystem": "nvmf", 00:23:28.021 "config": [ 00:23:28.021 { 00:23:28.021 "method": "nvmf_set_config", 00:23:28.021 "params": { 00:23:28.021 "discovery_filter": "match_any", 00:23:28.021 "admin_cmd_passthru": { 00:23:28.021 "identify_ctrlr": false 00:23:28.021 } 00:23:28.021 } 00:23:28.021 }, 00:23:28.021 { 00:23:28.021 "method": "nvmf_set_max_subsystems", 00:23:28.021 "params": { 00:23:28.021 "max_subsystems": 1024 00:23:28.021 } 00:23:28.021 }, 00:23:28.021 { 00:23:28.021 "method": "nvmf_set_crdt", 00:23:28.021 "params": { 00:23:28.021 "crdt1": 0, 00:23:28.021 "crdt2": 0, 00:23:28.021 "crdt3": 0 00:23:28.021 } 00:23:28.021 }, 00:23:28.021 { 00:23:28.021 "method": "nvmf_create_transport", 00:23:28.021 "params": { 00:23:28.021 "trtype": "TCP", 00:23:28.021 "max_queue_depth": 128, 00:23:28.021 "max_io_qpairs_per_ctrlr": 127, 00:23:28.021 "in_capsule_data_size": 4096, 00:23:28.021 "max_io_size": 131072, 00:23:28.021 "io_unit_size": 131072, 00:23:28.021 "max_aq_depth": 128, 00:23:28.021 "num_shared_buffers": 511, 00:23:28.021 "buf_cache_size": 4294967295, 00:23:28.021 "dif_insert_or_strip": false, 00:23:28.021 "zcopy": false, 00:23:28.021 "c2h_success": false, 00:23:28.021 "sock_priority": 0, 00:23:28.021 "abort_timeout_sec": 1, 00:23:28.021 "ack_timeout": 0, 00:23:28.021 "data_wr_pool_size": 0 00:23:28.021 } 00:23:28.021 }, 00:23:28.021 { 00:23:28.021 "method": "nvmf_create_subsystem", 00:23:28.021 "params": { 00:23:28.021 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.021 "allow_any_host": false, 00:23:28.021 "serial_number": "SPDK00000000000001", 00:23:28.021 "model_number": "SPDK bdev Controller", 00:23:28.021 "max_namespaces": 10, 00:23:28.021 "min_cntlid": 1, 00:23:28.021 "max_cntlid": 65519, 00:23:28.022 "ana_reporting": false 00:23:28.022 } 00:23:28.022 }, 00:23:28.022 { 00:23:28.022 "method": "nvmf_subsystem_add_host", 00:23:28.022 "params": { 00:23:28.022 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.022 "host": "nqn.2016-06.io.spdk:host1", 00:23:28.022 "psk": "/tmp/tmp.oDLnr2BK6p" 00:23:28.022 } 00:23:28.022 }, 00:23:28.022 { 00:23:28.022 "method": "nvmf_subsystem_add_ns", 00:23:28.022 "params": { 00:23:28.022 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.022 "namespace": { 00:23:28.022 "nsid": 1, 00:23:28.022 
"bdev_name": "malloc0", 00:23:28.022 "nguid": "A5DB13B038C64E8593CFE0FA910B857C", 00:23:28.022 "uuid": "a5db13b0-38c6-4e85-93cf-e0fa910b857c", 00:23:28.022 "no_auto_visible": false 00:23:28.022 } 00:23:28.022 } 00:23:28.022 }, 00:23:28.022 { 00:23:28.022 "method": "nvmf_subsystem_add_listener", 00:23:28.022 "params": { 00:23:28.022 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.022 "listen_address": { 00:23:28.022 "trtype": "TCP", 00:23:28.022 "adrfam": "IPv4", 00:23:28.022 "traddr": "10.0.0.2", 00:23:28.022 "trsvcid": "4420" 00:23:28.022 }, 00:23:28.022 "secure_channel": true 00:23:28.022 } 00:23:28.022 } 00:23:28.022 ] 00:23:28.022 } 00:23:28.022 ] 00:23:28.022 }' 00:23:28.022 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:28.281 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:23:28.281 "subsystems": [ 00:23:28.281 { 00:23:28.281 "subsystem": "keyring", 00:23:28.281 "config": [] 00:23:28.281 }, 00:23:28.281 { 00:23:28.281 "subsystem": "iobuf", 00:23:28.281 "config": [ 00:23:28.281 { 00:23:28.281 "method": "iobuf_set_options", 00:23:28.281 "params": { 00:23:28.281 "small_pool_count": 8192, 00:23:28.281 "large_pool_count": 1024, 00:23:28.281 "small_bufsize": 8192, 00:23:28.281 "large_bufsize": 135168 00:23:28.281 } 00:23:28.281 } 00:23:28.281 ] 00:23:28.281 }, 00:23:28.281 { 00:23:28.281 "subsystem": "sock", 00:23:28.281 "config": [ 00:23:28.281 { 00:23:28.281 "method": "sock_set_default_impl", 00:23:28.281 "params": { 00:23:28.281 "impl_name": "posix" 00:23:28.281 } 00:23:28.281 }, 00:23:28.281 { 00:23:28.281 "method": "sock_impl_set_options", 00:23:28.281 "params": { 00:23:28.281 "impl_name": "ssl", 00:23:28.281 "recv_buf_size": 4096, 00:23:28.281 "send_buf_size": 4096, 00:23:28.281 "enable_recv_pipe": true, 00:23:28.281 "enable_quickack": false, 00:23:28.281 "enable_placement_id": 0, 00:23:28.281 "enable_zerocopy_send_server": true, 00:23:28.281 "enable_zerocopy_send_client": false, 00:23:28.281 "zerocopy_threshold": 0, 00:23:28.281 "tls_version": 0, 00:23:28.281 "enable_ktls": false 00:23:28.281 } 00:23:28.281 }, 00:23:28.281 { 00:23:28.281 "method": "sock_impl_set_options", 00:23:28.281 "params": { 00:23:28.281 "impl_name": "posix", 00:23:28.281 "recv_buf_size": 2097152, 00:23:28.281 "send_buf_size": 2097152, 00:23:28.281 "enable_recv_pipe": true, 00:23:28.281 "enable_quickack": false, 00:23:28.281 "enable_placement_id": 0, 00:23:28.281 "enable_zerocopy_send_server": true, 00:23:28.281 "enable_zerocopy_send_client": false, 00:23:28.281 "zerocopy_threshold": 0, 00:23:28.281 "tls_version": 0, 00:23:28.281 "enable_ktls": false 00:23:28.281 } 00:23:28.281 } 00:23:28.281 ] 00:23:28.281 }, 00:23:28.281 { 00:23:28.281 "subsystem": "vmd", 00:23:28.281 "config": [] 00:23:28.281 }, 00:23:28.281 { 00:23:28.281 "subsystem": "accel", 00:23:28.281 "config": [ 00:23:28.281 { 00:23:28.281 "method": "accel_set_options", 00:23:28.281 "params": { 00:23:28.281 "small_cache_size": 128, 00:23:28.281 "large_cache_size": 16, 00:23:28.281 "task_count": 2048, 00:23:28.281 "sequence_count": 2048, 00:23:28.281 "buf_count": 2048 00:23:28.281 } 00:23:28.281 } 00:23:28.281 ] 00:23:28.281 }, 00:23:28.281 { 00:23:28.281 "subsystem": "bdev", 00:23:28.281 "config": [ 00:23:28.281 { 00:23:28.281 "method": "bdev_set_options", 00:23:28.281 "params": { 00:23:28.281 "bdev_io_pool_size": 65535, 00:23:28.281 "bdev_io_cache_size": 256, 00:23:28.281 
"bdev_auto_examine": true, 00:23:28.281 "iobuf_small_cache_size": 128, 00:23:28.281 "iobuf_large_cache_size": 16 00:23:28.281 } 00:23:28.281 }, 00:23:28.281 { 00:23:28.281 "method": "bdev_raid_set_options", 00:23:28.281 "params": { 00:23:28.281 "process_window_size_kb": 1024, 00:23:28.281 "process_max_bandwidth_mb_sec": 0 00:23:28.281 } 00:23:28.281 }, 00:23:28.281 { 00:23:28.281 "method": "bdev_iscsi_set_options", 00:23:28.281 "params": { 00:23:28.281 "timeout_sec": 30 00:23:28.281 } 00:23:28.281 }, 00:23:28.281 { 00:23:28.281 "method": "bdev_nvme_set_options", 00:23:28.281 "params": { 00:23:28.281 "action_on_timeout": "none", 00:23:28.281 "timeout_us": 0, 00:23:28.281 "timeout_admin_us": 0, 00:23:28.281 "keep_alive_timeout_ms": 10000, 00:23:28.281 "arbitration_burst": 0, 00:23:28.281 "low_priority_weight": 0, 00:23:28.281 "medium_priority_weight": 0, 00:23:28.281 "high_priority_weight": 0, 00:23:28.281 "nvme_adminq_poll_period_us": 10000, 00:23:28.281 "nvme_ioq_poll_period_us": 0, 00:23:28.281 "io_queue_requests": 512, 00:23:28.281 "delay_cmd_submit": true, 00:23:28.281 "transport_retry_count": 4, 00:23:28.281 "bdev_retry_count": 3, 00:23:28.281 "transport_ack_timeout": 0, 00:23:28.281 "ctrlr_loss_timeout_sec": 0, 00:23:28.281 "reconnect_delay_sec": 0, 00:23:28.281 "fast_io_fail_timeout_sec": 0, 00:23:28.281 "disable_auto_failback": false, 00:23:28.281 "generate_uuids": false, 00:23:28.281 "transport_tos": 0, 00:23:28.281 "nvme_error_stat": false, 00:23:28.281 "rdma_srq_size": 0, 00:23:28.281 "io_path_stat": false, 00:23:28.281 "allow_accel_sequence": false, 00:23:28.281 "rdma_max_cq_size": 0, 00:23:28.281 "rdma_cm_event_timeout_ms": 0, 00:23:28.281 "dhchap_digests": [ 00:23:28.281 "sha256", 00:23:28.281 "sha384", 00:23:28.281 "sha512" 00:23:28.281 ], 00:23:28.281 "dhchap_dhgroups": [ 00:23:28.281 "null", 00:23:28.281 "ffdhe2048", 00:23:28.281 "ffdhe3072", 00:23:28.281 "ffdhe4096", 00:23:28.281 "ffdhe6144", 00:23:28.281 "ffdhe8192" 00:23:28.281 ] 00:23:28.281 } 00:23:28.281 }, 00:23:28.281 { 00:23:28.281 "method": "bdev_nvme_attach_controller", 00:23:28.281 "params": { 00:23:28.281 "name": "TLSTEST", 00:23:28.281 "trtype": "TCP", 00:23:28.281 "adrfam": "IPv4", 00:23:28.281 "traddr": "10.0.0.2", 00:23:28.281 "trsvcid": "4420", 00:23:28.281 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.281 "prchk_reftag": false, 00:23:28.281 "prchk_guard": false, 00:23:28.281 "ctrlr_loss_timeout_sec": 0, 00:23:28.281 "reconnect_delay_sec": 0, 00:23:28.281 "fast_io_fail_timeout_sec": 0, 00:23:28.281 "psk": "/tmp/tmp.oDLnr2BK6p", 00:23:28.281 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:28.281 "hdgst": false, 00:23:28.281 "ddgst": false 00:23:28.281 } 00:23:28.281 }, 00:23:28.281 { 00:23:28.281 "method": "bdev_nvme_set_hotplug", 00:23:28.281 "params": { 00:23:28.281 "period_us": 100000, 00:23:28.281 "enable": false 00:23:28.281 } 00:23:28.281 }, 00:23:28.281 { 00:23:28.281 "method": "bdev_wait_for_examine" 00:23:28.281 } 00:23:28.281 ] 00:23:28.281 }, 00:23:28.281 { 00:23:28.281 "subsystem": "nbd", 00:23:28.281 "config": [] 00:23:28.281 } 00:23:28.281 ] 00:23:28.281 }' 00:23:28.281 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 694548 00:23:28.281 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 694548 ']' 00:23:28.281 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 694548 00:23:28.281 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:28.281 
16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:28.281 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 694548 00:23:28.281 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:28.281 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:28.281 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 694548' 00:23:28.281 killing process with pid 694548 00:23:28.281 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 694548 00:23:28.281 Received shutdown signal, test time was about 10.000000 seconds 00:23:28.281 00:23:28.282 Latency(us) 00:23:28.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.282 =================================================================================================================== 00:23:28.282 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:28.282 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 694548 00:23:28.282 [2024-07-26 16:28:47.821324] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:29.220 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 694137 00:23:29.220 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 694137 ']' 00:23:29.220 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 694137 00:23:29.220 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:29.220 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:29.220 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 694137 00:23:29.220 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:29.220 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:29.220 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 694137' 00:23:29.220 killing process with pid 694137 00:23:29.220 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 694137 00:23:29.220 [2024-07-26 16:28:48.770214] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:29.220 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 694137 00:23:30.603 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:30.603 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:30.603 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:23:30.603 "subsystems": [ 00:23:30.603 { 00:23:30.603 "subsystem": "keyring", 00:23:30.603 "config": [] 00:23:30.603 }, 00:23:30.603 { 00:23:30.603 "subsystem": "iobuf", 00:23:30.603 "config": [ 00:23:30.603 { 00:23:30.603 "method": "iobuf_set_options", 00:23:30.603 "params": { 
00:23:30.603 "small_pool_count": 8192, 00:23:30.603 "large_pool_count": 1024, 00:23:30.603 "small_bufsize": 8192, 00:23:30.603 "large_bufsize": 135168 00:23:30.603 } 00:23:30.603 } 00:23:30.603 ] 00:23:30.603 }, 00:23:30.603 { 00:23:30.603 "subsystem": "sock", 00:23:30.603 "config": [ 00:23:30.603 { 00:23:30.603 "method": "sock_set_default_impl", 00:23:30.603 "params": { 00:23:30.603 "impl_name": "posix" 00:23:30.603 } 00:23:30.603 }, 00:23:30.603 { 00:23:30.603 "method": "sock_impl_set_options", 00:23:30.603 "params": { 00:23:30.603 "impl_name": "ssl", 00:23:30.603 "recv_buf_size": 4096, 00:23:30.603 "send_buf_size": 4096, 00:23:30.603 "enable_recv_pipe": true, 00:23:30.603 "enable_quickack": false, 00:23:30.603 "enable_placement_id": 0, 00:23:30.603 "enable_zerocopy_send_server": true, 00:23:30.603 "enable_zerocopy_send_client": false, 00:23:30.603 "zerocopy_threshold": 0, 00:23:30.603 "tls_version": 0, 00:23:30.603 "enable_ktls": false 00:23:30.603 } 00:23:30.603 }, 00:23:30.603 { 00:23:30.603 "method": "sock_impl_set_options", 00:23:30.603 "params": { 00:23:30.603 "impl_name": "posix", 00:23:30.603 "recv_buf_size": 2097152, 00:23:30.603 "send_buf_size": 2097152, 00:23:30.603 "enable_recv_pipe": true, 00:23:30.603 "enable_quickack": false, 00:23:30.603 "enable_placement_id": 0, 00:23:30.603 "enable_zerocopy_send_server": true, 00:23:30.603 "enable_zerocopy_send_client": false, 00:23:30.603 "zerocopy_threshold": 0, 00:23:30.603 "tls_version": 0, 00:23:30.603 "enable_ktls": false 00:23:30.603 } 00:23:30.603 } 00:23:30.603 ] 00:23:30.603 }, 00:23:30.603 { 00:23:30.603 "subsystem": "vmd", 00:23:30.603 "config": [] 00:23:30.603 }, 00:23:30.603 { 00:23:30.603 "subsystem": "accel", 00:23:30.603 "config": [ 00:23:30.603 { 00:23:30.603 "method": "accel_set_options", 00:23:30.603 "params": { 00:23:30.603 "small_cache_size": 128, 00:23:30.603 "large_cache_size": 16, 00:23:30.603 "task_count": 2048, 00:23:30.603 "sequence_count": 2048, 00:23:30.603 "buf_count": 2048 00:23:30.603 } 00:23:30.603 } 00:23:30.603 ] 00:23:30.603 }, 00:23:30.603 { 00:23:30.603 "subsystem": "bdev", 00:23:30.603 "config": [ 00:23:30.603 { 00:23:30.603 "method": "bdev_set_options", 00:23:30.603 "params": { 00:23:30.603 "bdev_io_pool_size": 65535, 00:23:30.603 "bdev_io_cache_size": 256, 00:23:30.603 "bdev_auto_examine": true, 00:23:30.603 "iobuf_small_cache_size": 128, 00:23:30.603 "iobuf_large_cache_size": 16 00:23:30.603 } 00:23:30.603 }, 00:23:30.603 { 00:23:30.603 "method": "bdev_raid_set_options", 00:23:30.603 "params": { 00:23:30.603 "process_window_size_kb": 1024, 00:23:30.603 "process_max_bandwidth_mb_sec": 0 00:23:30.603 } 00:23:30.603 }, 00:23:30.603 { 00:23:30.603 "method": "bdev_iscsi_set_options", 00:23:30.603 "params": { 00:23:30.603 "timeout_sec": 30 00:23:30.603 } 00:23:30.603 }, 00:23:30.603 { 00:23:30.603 "method": "bdev_nvme_set_options", 00:23:30.603 "params": { 00:23:30.603 "action_on_timeout": "none", 00:23:30.603 "timeout_us": 0, 00:23:30.603 "timeout_admin_us": 0, 00:23:30.603 "keep_alive_timeout_ms": 10000, 00:23:30.603 "arbitration_burst": 0, 00:23:30.603 "low_priority_weight": 0, 00:23:30.603 "medium_priority_weight": 0, 00:23:30.603 "high_priority_weight": 0, 00:23:30.603 "nvme_adminq_poll_period_us": 10000, 00:23:30.603 "nvme_ioq_poll_period_us": 0, 00:23:30.603 "io_queue_requests": 0, 00:23:30.603 "delay_cmd_submit": true, 00:23:30.603 "transport_retry_count": 4, 00:23:30.603 "bdev_retry_count": 3, 00:23:30.603 "transport_ack_timeout": 0, 00:23:30.603 "ctrlr_loss_timeout_sec": 0, 00:23:30.603 
"reconnect_delay_sec": 0, 00:23:30.603 "fast_io_fail_timeout_sec": 0, 00:23:30.603 "disable_auto_failback": false, 00:23:30.603 "generate_uuids": false, 00:23:30.603 "transport_tos": 0, 00:23:30.603 "nvme_error_stat": false, 00:23:30.603 "rdma_srq_size": 0, 00:23:30.603 "io_path_stat": false, 00:23:30.603 "allow_accel_sequence": false, 00:23:30.603 "rdma_max_cq_size": 0, 00:23:30.603 "rdma_cm_event_timeout_ms": 0, 00:23:30.603 "dhchap_digests": [ 00:23:30.603 "sha256", 00:23:30.604 "sha384", 00:23:30.604 "sha512" 00:23:30.604 ], 00:23:30.604 "dhchap_dhgroups": [ 00:23:30.604 "null", 00:23:30.604 "ffdhe2048", 00:23:30.604 "ffdhe3072", 00:23:30.604 "ffdhe4096", 00:23:30.604 "ffdhe6144", 00:23:30.604 "ffdhe8192" 00:23:30.604 ] 00:23:30.604 } 00:23:30.604 }, 00:23:30.604 { 00:23:30.604 "method": "bdev_nvme_set_hotplug", 00:23:30.604 "params": { 00:23:30.604 "period_us": 100000, 00:23:30.604 "enable": false 00:23:30.604 } 00:23:30.604 }, 00:23:30.604 { 00:23:30.604 "method": "bdev_malloc_create", 00:23:30.604 "params": { 00:23:30.604 "name": "malloc0", 00:23:30.604 "num_blocks": 8192, 00:23:30.604 "block_size": 4096, 00:23:30.604 "physical_block_size": 4096, 00:23:30.604 "uuid": "a5db13b0-38c6-4e85-93cf-e0fa910b857c", 00:23:30.604 "optimal_io_boundary": 0, 00:23:30.604 "md_size": 0, 00:23:30.604 "dif_type": 0, 00:23:30.604 "dif_is_head_of_md": false, 00:23:30.604 "dif_pi_format": 0 00:23:30.604 } 00:23:30.604 }, 00:23:30.604 { 00:23:30.604 "method": "bdev_wait_for_examine" 00:23:30.604 } 00:23:30.604 ] 00:23:30.604 }, 00:23:30.604 { 00:23:30.604 "subsystem": "nbd", 00:23:30.604 "config": [] 00:23:30.604 }, 00:23:30.604 { 00:23:30.604 "subsystem": "scheduler", 00:23:30.604 "config": [ 00:23:30.604 { 00:23:30.604 "method": "framework_set_scheduler", 00:23:30.604 "params": { 00:23:30.604 "name": "static" 00:23:30.604 } 00:23:30.604 } 00:23:30.604 ] 00:23:30.604 }, 00:23:30.604 { 00:23:30.604 "subsystem": "nvmf", 00:23:30.604 "config": [ 00:23:30.604 { 00:23:30.604 "method": "nvmf_set_config", 00:23:30.604 "params": { 00:23:30.604 "discovery_filter": "match_any", 00:23:30.604 "admin_cmd_passthru": { 00:23:30.604 "identify_ctrlr": false 00:23:30.604 } 00:23:30.604 } 00:23:30.604 }, 00:23:30.604 { 00:23:30.604 "method": "nvmf_set_max_subsystems", 00:23:30.604 "params": { 00:23:30.604 "max_subsystems": 1024 00:23:30.604 } 00:23:30.604 }, 00:23:30.604 { 00:23:30.604 "method": "nvmf_set_crdt", 00:23:30.604 "params": { 00:23:30.604 "crdt1": 0, 00:23:30.604 "crdt2": 0, 00:23:30.604 "crdt3": 0 00:23:30.604 } 00:23:30.604 }, 00:23:30.604 { 00:23:30.604 "method": "nvmf_create_transport", 00:23:30.604 "params": { 00:23:30.604 "trtype": "TCP", 00:23:30.604 "max_queue_depth": 128, 00:23:30.604 "max_io_qpairs_per_ctrlr": 127, 00:23:30.604 "in_capsule_data_size": 4096, 00:23:30.604 "max_io_size": 131072, 00:23:30.604 "io_unit_size": 131072, 00:23:30.604 "max_aq_depth": 128, 00:23:30.604 "num_shared_buffers": 511, 00:23:30.604 "buf_cache_size": 4294967295, 00:23:30.604 "dif_insert_or_strip": false, 00:23:30.604 "zcopy": false, 00:23:30.604 "c2h_success": false, 00:23:30.604 "sock_priority": 0, 00:23:30.604 "abort_timeout_sec": 1, 00:23:30.604 "ack_timeout": 0, 00:23:30.604 "data_wr_pool_size": 0 00:23:30.604 } 00:23:30.604 }, 00:23:30.604 { 00:23:30.604 "method": "nvmf_create_subsystem", 00:23:30.604 "params": { 00:23:30.604 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.604 "allow_any_host": false, 00:23:30.604 "serial_number": "SPDK00000000000001", 00:23:30.604 "model_number": "SPDK bdev Controller", 00:23:30.604 
"max_namespaces": 10, 00:23:30.604 "min_cntlid": 1, 00:23:30.604 "max_cntlid": 65519, 00:23:30.604 "ana_reporting": false 00:23:30.604 } 00:23:30.604 }, 00:23:30.604 { 00:23:30.604 "method": "nvmf_subsystem_add_host", 00:23:30.604 "params": { 00:23:30.604 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.604 "host": "nqn.2016-06.io.spdk:host1", 00:23:30.604 "psk": "/tmp/tmp.oDLnr2BK6p" 00:23:30.604 } 00:23:30.604 }, 00:23:30.604 { 00:23:30.604 "method": "nvmf_subsystem_add_ns", 00:23:30.604 "params": { 00:23:30.604 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.604 "namespace": { 00:23:30.604 "nsid": 1, 00:23:30.604 "bdev_name": "malloc0", 00:23:30.604 "nguid": "A5DB13B038C64E8593CFE0FA910B857C", 00:23:30.604 "uuid": "a5db13b0-38c6-4e85-93cf-e0fa910b857c", 00:23:30.604 "no_auto_visible": false 00:23:30.604 } 00:23:30.604 } 00:23:30.604 }, 00:23:30.604 { 00:23:30.604 "method": "nvmf_subsystem_add_listener", 00:23:30.604 "params": { 00:23:30.604 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.604 "listen_address": { 00:23:30.604 "trtype": "TCP", 00:23:30.604 "adrfam": "IPv4", 00:23:30.604 "traddr": "10.0.0.2", 00:23:30.604 "trsvcid": "4420" 00:23:30.604 }, 00:23:30.604 "secure_channel": true 00:23:30.604 } 00:23:30.604 } 00:23:30.604 ] 00:23:30.604 } 00:23:30.604 ] 00:23:30.604 }' 00:23:30.604 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:30.604 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.604 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=694975 00:23:30.604 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:30.604 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 694975 00:23:30.604 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 694975 ']' 00:23:30.604 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.604 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:30.604 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:30.604 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:30.604 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.604 [2024-07-26 16:28:50.171571] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:30.604 [2024-07-26 16:28:50.171726] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.604 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.604 [2024-07-26 16:28:50.327692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.864 [2024-07-26 16:28:50.582854] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:30.864 [2024-07-26 16:28:50.582942] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:30.864 [2024-07-26 16:28:50.582970] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:30.864 [2024-07-26 16:28:50.582997] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:30.864 [2024-07-26 16:28:50.583019] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:30.864 [2024-07-26 16:28:50.583181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:31.429 [2024-07-26 16:28:51.125224] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:31.429 [2024-07-26 16:28:51.141192] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:31.429 [2024-07-26 16:28:51.157228] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:31.429 [2024-07-26 16:28:51.157526] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:31.429 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:31.429 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:31.429 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:31.429 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:31.429 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.688 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:31.688 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=695128 00:23:31.688 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 695128 /var/tmp/bdevperf.sock 00:23:31.688 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 695128 ']' 00:23:31.688 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:31.688 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:31.688 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:23:31.688 "subsystems": [ 00:23:31.688 { 00:23:31.688 "subsystem": "keyring", 00:23:31.688 "config": [] 00:23:31.688 }, 00:23:31.688 { 00:23:31.688 "subsystem": "iobuf", 00:23:31.688 "config": [ 00:23:31.688 { 00:23:31.688 "method": "iobuf_set_options", 00:23:31.688 "params": { 00:23:31.688 "small_pool_count": 8192, 00:23:31.688 "large_pool_count": 1024, 00:23:31.688 "small_bufsize": 8192, 00:23:31.688 "large_bufsize": 135168 00:23:31.688 } 00:23:31.688 } 00:23:31.688 ] 00:23:31.688 }, 00:23:31.688 { 00:23:31.688 "subsystem": "sock", 00:23:31.688 "config": [ 00:23:31.688 { 00:23:31.688 "method": "sock_set_default_impl", 00:23:31.688 "params": { 00:23:31.688 "impl_name": "posix" 00:23:31.688 } 00:23:31.688 }, 00:23:31.688 { 00:23:31.688 "method": "sock_impl_set_options", 00:23:31.688 "params": { 00:23:31.688 "impl_name": "ssl", 
00:23:31.688 "recv_buf_size": 4096, 00:23:31.688 "send_buf_size": 4096, 00:23:31.688 "enable_recv_pipe": true, 00:23:31.688 "enable_quickack": false, 00:23:31.688 "enable_placement_id": 0, 00:23:31.688 "enable_zerocopy_send_server": true, 00:23:31.688 "enable_zerocopy_send_client": false, 00:23:31.688 "zerocopy_threshold": 0, 00:23:31.688 "tls_version": 0, 00:23:31.688 "enable_ktls": false 00:23:31.688 } 00:23:31.688 }, 00:23:31.688 { 00:23:31.688 "method": "sock_impl_set_options", 00:23:31.688 "params": { 00:23:31.688 "impl_name": "posix", 00:23:31.688 "recv_buf_size": 2097152, 00:23:31.688 "send_buf_size": 2097152, 00:23:31.688 "enable_recv_pipe": true, 00:23:31.688 "enable_quickack": false, 00:23:31.688 "enable_placement_id": 0, 00:23:31.688 "enable_zerocopy_send_server": true, 00:23:31.688 "enable_zerocopy_send_client": false, 00:23:31.688 "zerocopy_threshold": 0, 00:23:31.688 "tls_version": 0, 00:23:31.688 "enable_ktls": false 00:23:31.688 } 00:23:31.688 } 00:23:31.688 ] 00:23:31.688 }, 00:23:31.688 { 00:23:31.688 "subsystem": "vmd", 00:23:31.688 "config": [] 00:23:31.688 }, 00:23:31.688 { 00:23:31.688 "subsystem": "accel", 00:23:31.688 "config": [ 00:23:31.688 { 00:23:31.688 "method": "accel_set_options", 00:23:31.688 "params": { 00:23:31.688 "small_cache_size": 128, 00:23:31.688 "large_cache_size": 16, 00:23:31.688 "task_count": 2048, 00:23:31.688 "sequence_count": 2048, 00:23:31.688 "buf_count": 2048 00:23:31.688 } 00:23:31.688 } 00:23:31.688 ] 00:23:31.688 }, 00:23:31.688 { 00:23:31.688 "subsystem": "bdev", 00:23:31.688 "config": [ 00:23:31.688 { 00:23:31.688 "method": "bdev_set_options", 00:23:31.688 "params": { 00:23:31.688 "bdev_io_pool_size": 65535, 00:23:31.688 "bdev_io_cache_size": 256, 00:23:31.688 "bdev_auto_examine": true, 00:23:31.688 "iobuf_small_cache_size": 128, 00:23:31.688 "iobuf_large_cache_size": 16 00:23:31.688 } 00:23:31.688 }, 00:23:31.688 { 00:23:31.688 "method": "bdev_raid_set_options", 00:23:31.688 "params": { 00:23:31.688 "process_window_size_kb": 1024, 00:23:31.688 "process_max_bandwidth_mb_sec": 0 00:23:31.688 } 00:23:31.688 }, 00:23:31.688 { 00:23:31.688 "method": "bdev_iscsi_set_options", 00:23:31.688 "params": { 00:23:31.688 "timeout_sec": 30 00:23:31.688 } 00:23:31.688 }, 00:23:31.688 { 00:23:31.688 "method": "bdev_nvme_set_options", 00:23:31.688 "params": { 00:23:31.688 "action_on_timeout": "none", 00:23:31.688 "timeout_us": 0, 00:23:31.688 "timeout_admin_us": 0, 00:23:31.688 "keep_alive_timeout_ms": 10000, 00:23:31.688 "arbitration_burst": 0, 00:23:31.688 "low_priority_weight": 0, 00:23:31.688 "medium_priority_weight": 0, 00:23:31.688 "high_priority_weight": 0, 00:23:31.688 "nvme_adminq_poll_period_us": 10000, 00:23:31.688 "nvme_ioq_poll_period_us": 0, 00:23:31.688 "io_queue_requests": 512, 00:23:31.688 "delay_cmd_submit": true, 00:23:31.688 "transport_retry_count": 4, 00:23:31.688 "bdev_retry_count": 3, 00:23:31.688 "transport_ack_timeout": 0, 00:23:31.688 "ctrlr_loss_timeout_sec": 0, 00:23:31.688 "reconnect_delay_sec": 0, 00:23:31.688 "fast_io_fail_timeout_sec": 0, 00:23:31.688 "disable_auto_failback": false, 00:23:31.688 "generate_uuids": false, 00:23:31.688 "transport_tos": 0, 00:23:31.688 "nvme_error_stat": false, 00:23:31.688 "rdma_srq_size": 0, 00:23:31.688 "io_path_stat": false, 00:23:31.688 "allow_accel_sequence": false, 00:23:31.688 "rdma_max_cq_size": 0, 00:23:31.688 "rdma_cm_event_timeout_ms": 0, 00:23:31.688 "dhchap_digests": [ 00:23:31.688 "sha256", 00:23:31.688 "sha384", 00:23:31.688 "sha512" 00:23:31.688 ], 00:23:31.688 
"dhchap_dhgroups": [ 00:23:31.688 "null", 00:23:31.688 "ffdhe2048", 00:23:31.688 "ffdhe3072", 00:23:31.688 "ffdhe4096", 00:23:31.688 "ffdhe6144", 00:23:31.688 "ffdhe8192" 00:23:31.688 ] 00:23:31.688 } 00:23:31.688 }, 00:23:31.688 { 00:23:31.688 "method": "bdev_nvme_attach_controller", 00:23:31.688 "params": { 00:23:31.688 "name": "TLSTEST", 00:23:31.688 "trtype": "TCP", 00:23:31.688 "adrfam": "IPv4", 00:23:31.688 "traddr": "10.0.0.2", 00:23:31.688 "trsvcid": "4420", 00:23:31.688 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.688 "prchk_reftag": false, 00:23:31.688 "prchk_guard": false, 00:23:31.688 "ctrlr_loss_timeout_sec": 0, 00:23:31.688 "reconnect_delay_sec": 0, 00:23:31.688 "fast_io_fail_timeout_sec": 0, 00:23:31.689 "psk": "/tmp/tmp.oDLnr2BK6p", 00:23:31.689 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:31.689 "hdgst": false, 00:23:31.689 "ddgst": false 00:23:31.689 } 00:23:31.689 }, 00:23:31.689 { 00:23:31.689 "method": "bdev_nvme_set_hotplug", 00:23:31.689 "params": { 00:23:31.689 "period_us": 100000, 00:23:31.689 "enable": false 00:23:31.689 } 00:23:31.689 }, 00:23:31.689 { 00:23:31.689 "method": "bdev_wait_for_examine" 00:23:31.689 } 00:23:31.689 ] 00:23:31.689 }, 00:23:31.689 { 00:23:31.689 "subsystem": "nbd", 00:23:31.689 "config": [] 00:23:31.689 } 00:23:31.689 ] 00:23:31.689 }' 00:23:31.689 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:31.689 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:31.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:31.689 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:31.689 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.689 [2024-07-26 16:28:51.288820] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:31.689 [2024-07-26 16:28:51.288969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid695128 ] 00:23:31.689 EAL: No free 2048 kB hugepages reported on node 1 00:23:31.689 [2024-07-26 16:28:51.414971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.947 [2024-07-26 16:28:51.642076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:32.513 [2024-07-26 16:28:52.022035] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:32.513 [2024-07-26 16:28:52.022221] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:32.513 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:32.513 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:32.513 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:32.772 Running I/O for 10 seconds... 
00:23:42.749 00:23:42.749 Latency(us) 00:23:42.749 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:42.749 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:42.749 Verification LBA range: start 0x0 length 0x2000 00:23:42.749 TLSTESTn1 : 10.05 2471.75 9.66 0.00 0.00 51634.61 10243.03 73011.96 00:23:42.749 =================================================================================================================== 00:23:42.749 Total : 2471.75 9.66 0.00 0.00 51634.61 10243.03 73011.96 00:23:42.749 0 00:23:42.749 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:42.749 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 695128 00:23:42.749 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 695128 ']' 00:23:42.749 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 695128 00:23:42.749 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:42.749 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:42.749 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 695128 00:23:42.749 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:42.749 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:42.749 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 695128' 00:23:42.749 killing process with pid 695128 00:23:42.749 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 695128 00:23:42.749 Received shutdown signal, test time was about 10.000000 seconds 00:23:42.749 00:23:42.749 Latency(us) 00:23:42.749 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:42.749 =================================================================================================================== 00:23:42.749 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:42.749 [2024-07-26 16:29:02.499289] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:42.749 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 695128 00:23:43.691 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 694975 00:23:43.691 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 694975 ']' 00:23:43.692 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 694975 00:23:43.692 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:43.692 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:43.692 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 694975 00:23:43.954 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:43.954 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:43.954 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@968 -- # echo 'killing process with pid 694975' 00:23:43.954 killing process with pid 694975 00:23:43.954 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 694975 00:23:43.954 [2024-07-26 16:29:03.467709] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 694975 00:23:43.954 removal in v24.09 hit 1 times 00:23:45.333 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:23:45.333 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:45.333 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:45.333 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:45.333 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=696732 00:23:45.333 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:45.333 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 696732 00:23:45.333 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 696732 ']' 00:23:45.333 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:45.333 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:45.333 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:45.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:45.333 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:45.334 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:45.334 [2024-07-26 16:29:05.014660] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:45.334 [2024-07-26 16:29:05.014807] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:45.334 EAL: No free 2048 kB hugepages reported on node 1 00:23:45.593 [2024-07-26 16:29:05.144291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.853 [2024-07-26 16:29:05.390837] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:45.853 [2024-07-26 16:29:05.390915] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:45.853 [2024-07-26 16:29:05.390943] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:45.853 [2024-07-26 16:29:05.390969] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:45.853 [2024-07-26 16:29:05.390993] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:45.853 [2024-07-26 16:29:05.391042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.420 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:46.420 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:46.420 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:46.420 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:46.420 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.420 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:46.420 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.oDLnr2BK6p 00:23:46.420 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.oDLnr2BK6p 00:23:46.420 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:46.678 [2024-07-26 16:29:06.196275] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:46.678 16:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:46.936 16:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:47.194 [2024-07-26 16:29:06.757845] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:47.194 [2024-07-26 16:29:06.758185] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:47.194 16:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:47.481 malloc0 00:23:47.481 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:47.751 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.oDLnr2BK6p 00:23:48.009 [2024-07-26 16:29:07.618579] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:48.009 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=697121 00:23:48.009 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:48.009 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:48.009 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 697121 /var/tmp/bdevperf.sock 00:23:48.009 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' 
-z 697121 ']' 00:23:48.009 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:48.009 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:48.009 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:48.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:48.009 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:48.009 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.009 [2024-07-26 16:29:07.717266] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:48.009 [2024-07-26 16:29:07.717430] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid697121 ] 00:23:48.268 EAL: No free 2048 kB hugepages reported on node 1 00:23:48.268 [2024-07-26 16:29:07.846164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.528 [2024-07-26 16:29:08.105134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:49.095 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:49.095 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:49.095 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oDLnr2BK6p 00:23:49.353 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:49.612 [2024-07-26 16:29:09.232939] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:49.612 nvme0n1 00:23:49.612 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:49.872 Running I/O for 1 seconds... 
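This pass moves the initiator off the deprecated in-config PSK path: the key file is first registered with the bdevperf keyring as key0, and the controller attach then references the key by name via --psk key0 rather than a filesystem path. A minimal sketch of just those two RPCs, using the socket and key file from this run:
  # initiator side: register the PSK file as a named keyring entry, then attach the TLS controller with it
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oDLnr2BK6p
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1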
00:23:50.810 00:23:50.810 Latency(us) 00:23:50.810 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.810 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:50.810 Verification LBA range: start 0x0 length 0x2000 00:23:50.810 nvme0n1 : 1.04 2478.47 9.68 0.00 0.00 50596.98 10243.03 76895.57 00:23:50.810 =================================================================================================================== 00:23:50.810 Total : 2478.47 9.68 0.00 0.00 50596.98 10243.03 76895.57 00:23:50.810 0 00:23:50.810 16:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 697121 00:23:50.810 16:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 697121 ']' 00:23:50.810 16:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 697121 00:23:50.810 16:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:50.810 16:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:50.810 16:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 697121 00:23:50.810 16:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:50.810 16:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:50.810 16:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 697121' 00:23:50.810 killing process with pid 697121 00:23:50.810 16:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 697121 00:23:50.810 Received shutdown signal, test time was about 1.000000 seconds 00:23:50.810 00:23:50.810 Latency(us) 00:23:50.810 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.810 =================================================================================================================== 00:23:50.810 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:50.810 16:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 697121 00:23:52.192 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 696732 00:23:52.192 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 696732 ']' 00:23:52.192 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 696732 00:23:52.192 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:52.192 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:52.192 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 696732 00:23:52.192 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:52.192 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:52.192 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 696732' 00:23:52.192 killing process with pid 696732 00:23:52.192 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 696732 00:23:52.192 [2024-07-26 16:29:11.648878] app.c:1024:log_deprecation_hits: *WARNING*: 
nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:52.192 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 696732 00:23:53.576 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:23:53.576 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:53.576 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:53.576 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.576 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=697793 00:23:53.576 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:53.576 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 697793 00:23:53.576 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 697793 ']' 00:23:53.576 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.576 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:53.576 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.576 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:53.576 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.576 [2024-07-26 16:29:13.045887] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:53.576 [2024-07-26 16:29:13.046048] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.576 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.576 [2024-07-26 16:29:13.186586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.835 [2024-07-26 16:29:13.441871] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.835 [2024-07-26 16:29:13.441958] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.835 [2024-07-26 16:29:13.441987] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.835 [2024-07-26 16:29:13.442011] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.835 [2024-07-26 16:29:13.442033] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:53.835 [2024-07-26 16:29:13.442094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.402 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:54.402 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:54.402 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:54.402 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:54.402 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.402 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:54.402 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:23:54.402 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.402 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.402 [2024-07-26 16:29:13.990763] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:54.402 malloc0 00:23:54.402 [2024-07-26 16:29:14.065847] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:54.402 [2024-07-26 16:29:14.066234] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:54.402 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.402 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=697949 00:23:54.402 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:54.402 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 697949 /var/tmp/bdevperf.sock 00:23:54.402 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 697949 ']' 00:23:54.402 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:54.403 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:54.403 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:54.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:54.403 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:54.403 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.660 [2024-07-26 16:29:14.172231] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:23:54.660 [2024-07-26 16:29:14.172374] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid697949 ] 00:23:54.660 EAL: No free 2048 kB hugepages reported on node 1 00:23:54.660 [2024-07-26 16:29:14.298985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.920 [2024-07-26 16:29:14.555523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:55.486 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:55.486 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:55.486 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oDLnr2BK6p 00:23:55.743 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:56.001 [2024-07-26 16:29:15.560960] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:56.001 nvme0n1 00:23:56.001 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:56.261 Running I/O for 1 seconds... 00:23:57.198 00:23:57.198 Latency(us) 00:23:57.198 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:57.198 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:57.198 Verification LBA range: start 0x0 length 0x2000 00:23:57.198 nvme0n1 : 1.05 2509.90 9.80 0.00 0.00 49937.12 8107.05 69128.34 00:23:57.198 =================================================================================================================== 00:23:57.198 Total : 2509.90 9.80 0.00 0.00 49937.12 8107.05 69128.34 00:23:57.198 0 00:23:57.198 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:23:57.198 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.198 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.198 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.198 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:23:57.198 "subsystems": [ 00:23:57.198 { 00:23:57.198 "subsystem": "keyring", 00:23:57.198 "config": [ 00:23:57.198 { 00:23:57.198 "method": "keyring_file_add_key", 00:23:57.198 "params": { 00:23:57.198 "name": "key0", 00:23:57.198 "path": "/tmp/tmp.oDLnr2BK6p" 00:23:57.198 } 00:23:57.198 } 00:23:57.198 ] 00:23:57.198 }, 00:23:57.198 { 00:23:57.198 "subsystem": "iobuf", 00:23:57.198 "config": [ 00:23:57.198 { 00:23:57.198 "method": "iobuf_set_options", 00:23:57.198 "params": { 00:23:57.198 "small_pool_count": 8192, 00:23:57.198 "large_pool_count": 1024, 00:23:57.198 "small_bufsize": 8192, 00:23:57.198 "large_bufsize": 135168 00:23:57.198 } 00:23:57.198 } 00:23:57.198 ] 00:23:57.198 }, 00:23:57.198 { 00:23:57.198 
"subsystem": "sock", 00:23:57.198 "config": [ 00:23:57.198 { 00:23:57.198 "method": "sock_set_default_impl", 00:23:57.198 "params": { 00:23:57.198 "impl_name": "posix" 00:23:57.198 } 00:23:57.198 }, 00:23:57.198 { 00:23:57.198 "method": "sock_impl_set_options", 00:23:57.198 "params": { 00:23:57.198 "impl_name": "ssl", 00:23:57.198 "recv_buf_size": 4096, 00:23:57.198 "send_buf_size": 4096, 00:23:57.198 "enable_recv_pipe": true, 00:23:57.198 "enable_quickack": false, 00:23:57.198 "enable_placement_id": 0, 00:23:57.198 "enable_zerocopy_send_server": true, 00:23:57.198 "enable_zerocopy_send_client": false, 00:23:57.198 "zerocopy_threshold": 0, 00:23:57.198 "tls_version": 0, 00:23:57.198 "enable_ktls": false 00:23:57.198 } 00:23:57.198 }, 00:23:57.198 { 00:23:57.198 "method": "sock_impl_set_options", 00:23:57.198 "params": { 00:23:57.198 "impl_name": "posix", 00:23:57.198 "recv_buf_size": 2097152, 00:23:57.198 "send_buf_size": 2097152, 00:23:57.198 "enable_recv_pipe": true, 00:23:57.198 "enable_quickack": false, 00:23:57.198 "enable_placement_id": 0, 00:23:57.198 "enable_zerocopy_send_server": true, 00:23:57.198 "enable_zerocopy_send_client": false, 00:23:57.198 "zerocopy_threshold": 0, 00:23:57.198 "tls_version": 0, 00:23:57.198 "enable_ktls": false 00:23:57.198 } 00:23:57.198 } 00:23:57.198 ] 00:23:57.198 }, 00:23:57.198 { 00:23:57.198 "subsystem": "vmd", 00:23:57.198 "config": [] 00:23:57.198 }, 00:23:57.198 { 00:23:57.198 "subsystem": "accel", 00:23:57.198 "config": [ 00:23:57.198 { 00:23:57.198 "method": "accel_set_options", 00:23:57.198 "params": { 00:23:57.198 "small_cache_size": 128, 00:23:57.198 "large_cache_size": 16, 00:23:57.198 "task_count": 2048, 00:23:57.198 "sequence_count": 2048, 00:23:57.198 "buf_count": 2048 00:23:57.198 } 00:23:57.198 } 00:23:57.198 ] 00:23:57.198 }, 00:23:57.198 { 00:23:57.198 "subsystem": "bdev", 00:23:57.198 "config": [ 00:23:57.198 { 00:23:57.198 "method": "bdev_set_options", 00:23:57.198 "params": { 00:23:57.198 "bdev_io_pool_size": 65535, 00:23:57.198 "bdev_io_cache_size": 256, 00:23:57.198 "bdev_auto_examine": true, 00:23:57.198 "iobuf_small_cache_size": 128, 00:23:57.198 "iobuf_large_cache_size": 16 00:23:57.198 } 00:23:57.198 }, 00:23:57.198 { 00:23:57.198 "method": "bdev_raid_set_options", 00:23:57.198 "params": { 00:23:57.198 "process_window_size_kb": 1024, 00:23:57.198 "process_max_bandwidth_mb_sec": 0 00:23:57.198 } 00:23:57.198 }, 00:23:57.198 { 00:23:57.198 "method": "bdev_iscsi_set_options", 00:23:57.198 "params": { 00:23:57.198 "timeout_sec": 30 00:23:57.198 } 00:23:57.198 }, 00:23:57.198 { 00:23:57.198 "method": "bdev_nvme_set_options", 00:23:57.198 "params": { 00:23:57.198 "action_on_timeout": "none", 00:23:57.198 "timeout_us": 0, 00:23:57.198 "timeout_admin_us": 0, 00:23:57.198 "keep_alive_timeout_ms": 10000, 00:23:57.198 "arbitration_burst": 0, 00:23:57.198 "low_priority_weight": 0, 00:23:57.198 "medium_priority_weight": 0, 00:23:57.198 "high_priority_weight": 0, 00:23:57.198 "nvme_adminq_poll_period_us": 10000, 00:23:57.198 "nvme_ioq_poll_period_us": 0, 00:23:57.198 "io_queue_requests": 0, 00:23:57.198 "delay_cmd_submit": true, 00:23:57.198 "transport_retry_count": 4, 00:23:57.198 "bdev_retry_count": 3, 00:23:57.198 "transport_ack_timeout": 0, 00:23:57.198 "ctrlr_loss_timeout_sec": 0, 00:23:57.198 "reconnect_delay_sec": 0, 00:23:57.198 "fast_io_fail_timeout_sec": 0, 00:23:57.198 "disable_auto_failback": false, 00:23:57.198 "generate_uuids": false, 00:23:57.198 "transport_tos": 0, 00:23:57.198 "nvme_error_stat": false, 00:23:57.198 
"rdma_srq_size": 0, 00:23:57.198 "io_path_stat": false, 00:23:57.198 "allow_accel_sequence": false, 00:23:57.198 "rdma_max_cq_size": 0, 00:23:57.198 "rdma_cm_event_timeout_ms": 0, 00:23:57.198 "dhchap_digests": [ 00:23:57.198 "sha256", 00:23:57.198 "sha384", 00:23:57.198 "sha512" 00:23:57.198 ], 00:23:57.198 "dhchap_dhgroups": [ 00:23:57.198 "null", 00:23:57.198 "ffdhe2048", 00:23:57.198 "ffdhe3072", 00:23:57.198 "ffdhe4096", 00:23:57.198 "ffdhe6144", 00:23:57.198 "ffdhe8192" 00:23:57.198 ] 00:23:57.198 } 00:23:57.198 }, 00:23:57.198 { 00:23:57.199 "method": "bdev_nvme_set_hotplug", 00:23:57.199 "params": { 00:23:57.199 "period_us": 100000, 00:23:57.199 "enable": false 00:23:57.199 } 00:23:57.199 }, 00:23:57.199 { 00:23:57.199 "method": "bdev_malloc_create", 00:23:57.199 "params": { 00:23:57.199 "name": "malloc0", 00:23:57.199 "num_blocks": 8192, 00:23:57.199 "block_size": 4096, 00:23:57.199 "physical_block_size": 4096, 00:23:57.199 "uuid": "8246d183-299e-41ca-b804-2c60f7392641", 00:23:57.199 "optimal_io_boundary": 0, 00:23:57.199 "md_size": 0, 00:23:57.199 "dif_type": 0, 00:23:57.199 "dif_is_head_of_md": false, 00:23:57.199 "dif_pi_format": 0 00:23:57.199 } 00:23:57.199 }, 00:23:57.199 { 00:23:57.199 "method": "bdev_wait_for_examine" 00:23:57.199 } 00:23:57.199 ] 00:23:57.199 }, 00:23:57.199 { 00:23:57.199 "subsystem": "nbd", 00:23:57.199 "config": [] 00:23:57.199 }, 00:23:57.199 { 00:23:57.199 "subsystem": "scheduler", 00:23:57.199 "config": [ 00:23:57.199 { 00:23:57.199 "method": "framework_set_scheduler", 00:23:57.199 "params": { 00:23:57.199 "name": "static" 00:23:57.199 } 00:23:57.199 } 00:23:57.199 ] 00:23:57.199 }, 00:23:57.199 { 00:23:57.199 "subsystem": "nvmf", 00:23:57.199 "config": [ 00:23:57.199 { 00:23:57.199 "method": "nvmf_set_config", 00:23:57.199 "params": { 00:23:57.199 "discovery_filter": "match_any", 00:23:57.199 "admin_cmd_passthru": { 00:23:57.199 "identify_ctrlr": false 00:23:57.199 } 00:23:57.199 } 00:23:57.199 }, 00:23:57.199 { 00:23:57.199 "method": "nvmf_set_max_subsystems", 00:23:57.199 "params": { 00:23:57.199 "max_subsystems": 1024 00:23:57.199 } 00:23:57.199 }, 00:23:57.199 { 00:23:57.199 "method": "nvmf_set_crdt", 00:23:57.199 "params": { 00:23:57.199 "crdt1": 0, 00:23:57.199 "crdt2": 0, 00:23:57.199 "crdt3": 0 00:23:57.199 } 00:23:57.199 }, 00:23:57.199 { 00:23:57.199 "method": "nvmf_create_transport", 00:23:57.199 "params": { 00:23:57.199 "trtype": "TCP", 00:23:57.199 "max_queue_depth": 128, 00:23:57.199 "max_io_qpairs_per_ctrlr": 127, 00:23:57.199 "in_capsule_data_size": 4096, 00:23:57.199 "max_io_size": 131072, 00:23:57.199 "io_unit_size": 131072, 00:23:57.199 "max_aq_depth": 128, 00:23:57.199 "num_shared_buffers": 511, 00:23:57.199 "buf_cache_size": 4294967295, 00:23:57.199 "dif_insert_or_strip": false, 00:23:57.199 "zcopy": false, 00:23:57.199 "c2h_success": false, 00:23:57.199 "sock_priority": 0, 00:23:57.199 "abort_timeout_sec": 1, 00:23:57.199 "ack_timeout": 0, 00:23:57.199 "data_wr_pool_size": 0 00:23:57.199 } 00:23:57.199 }, 00:23:57.199 { 00:23:57.199 "method": "nvmf_create_subsystem", 00:23:57.199 "params": { 00:23:57.199 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.199 "allow_any_host": false, 00:23:57.199 "serial_number": "00000000000000000000", 00:23:57.199 "model_number": "SPDK bdev Controller", 00:23:57.199 "max_namespaces": 32, 00:23:57.199 "min_cntlid": 1, 00:23:57.199 "max_cntlid": 65519, 00:23:57.199 "ana_reporting": false 00:23:57.199 } 00:23:57.199 }, 00:23:57.199 { 00:23:57.199 "method": "nvmf_subsystem_add_host", 00:23:57.199 
"params": { 00:23:57.199 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.199 "host": "nqn.2016-06.io.spdk:host1", 00:23:57.199 "psk": "key0" 00:23:57.199 } 00:23:57.199 }, 00:23:57.199 { 00:23:57.199 "method": "nvmf_subsystem_add_ns", 00:23:57.199 "params": { 00:23:57.199 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.199 "namespace": { 00:23:57.199 "nsid": 1, 00:23:57.199 "bdev_name": "malloc0", 00:23:57.199 "nguid": "8246D183299E41CAB8042C60F7392641", 00:23:57.199 "uuid": "8246d183-299e-41ca-b804-2c60f7392641", 00:23:57.199 "no_auto_visible": false 00:23:57.199 } 00:23:57.199 } 00:23:57.199 }, 00:23:57.199 { 00:23:57.199 "method": "nvmf_subsystem_add_listener", 00:23:57.199 "params": { 00:23:57.199 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.199 "listen_address": { 00:23:57.199 "trtype": "TCP", 00:23:57.199 "adrfam": "IPv4", 00:23:57.199 "traddr": "10.0.0.2", 00:23:57.199 "trsvcid": "4420" 00:23:57.199 }, 00:23:57.199 "secure_channel": false, 00:23:57.199 "sock_impl": "ssl" 00:23:57.199 } 00:23:57.199 } 00:23:57.199 ] 00:23:57.199 } 00:23:57.199 ] 00:23:57.199 }' 00:23:57.199 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:57.768 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:23:57.768 "subsystems": [ 00:23:57.768 { 00:23:57.768 "subsystem": "keyring", 00:23:57.768 "config": [ 00:23:57.768 { 00:23:57.768 "method": "keyring_file_add_key", 00:23:57.768 "params": { 00:23:57.768 "name": "key0", 00:23:57.768 "path": "/tmp/tmp.oDLnr2BK6p" 00:23:57.768 } 00:23:57.768 } 00:23:57.768 ] 00:23:57.768 }, 00:23:57.768 { 00:23:57.768 "subsystem": "iobuf", 00:23:57.768 "config": [ 00:23:57.768 { 00:23:57.768 "method": "iobuf_set_options", 00:23:57.768 "params": { 00:23:57.768 "small_pool_count": 8192, 00:23:57.768 "large_pool_count": 1024, 00:23:57.768 "small_bufsize": 8192, 00:23:57.768 "large_bufsize": 135168 00:23:57.768 } 00:23:57.768 } 00:23:57.768 ] 00:23:57.768 }, 00:23:57.768 { 00:23:57.768 "subsystem": "sock", 00:23:57.768 "config": [ 00:23:57.768 { 00:23:57.768 "method": "sock_set_default_impl", 00:23:57.768 "params": { 00:23:57.768 "impl_name": "posix" 00:23:57.768 } 00:23:57.768 }, 00:23:57.768 { 00:23:57.768 "method": "sock_impl_set_options", 00:23:57.768 "params": { 00:23:57.768 "impl_name": "ssl", 00:23:57.768 "recv_buf_size": 4096, 00:23:57.768 "send_buf_size": 4096, 00:23:57.768 "enable_recv_pipe": true, 00:23:57.768 "enable_quickack": false, 00:23:57.768 "enable_placement_id": 0, 00:23:57.768 "enable_zerocopy_send_server": true, 00:23:57.768 "enable_zerocopy_send_client": false, 00:23:57.768 "zerocopy_threshold": 0, 00:23:57.768 "tls_version": 0, 00:23:57.768 "enable_ktls": false 00:23:57.768 } 00:23:57.768 }, 00:23:57.768 { 00:23:57.768 "method": "sock_impl_set_options", 00:23:57.768 "params": { 00:23:57.768 "impl_name": "posix", 00:23:57.768 "recv_buf_size": 2097152, 00:23:57.768 "send_buf_size": 2097152, 00:23:57.768 "enable_recv_pipe": true, 00:23:57.768 "enable_quickack": false, 00:23:57.768 "enable_placement_id": 0, 00:23:57.768 "enable_zerocopy_send_server": true, 00:23:57.768 "enable_zerocopy_send_client": false, 00:23:57.768 "zerocopy_threshold": 0, 00:23:57.768 "tls_version": 0, 00:23:57.768 "enable_ktls": false 00:23:57.768 } 00:23:57.768 } 00:23:57.768 ] 00:23:57.768 }, 00:23:57.768 { 00:23:57.768 "subsystem": "vmd", 00:23:57.768 "config": [] 00:23:57.768 }, 00:23:57.768 { 00:23:57.768 "subsystem": 
"accel", 00:23:57.768 "config": [ 00:23:57.768 { 00:23:57.768 "method": "accel_set_options", 00:23:57.768 "params": { 00:23:57.768 "small_cache_size": 128, 00:23:57.768 "large_cache_size": 16, 00:23:57.768 "task_count": 2048, 00:23:57.768 "sequence_count": 2048, 00:23:57.768 "buf_count": 2048 00:23:57.768 } 00:23:57.768 } 00:23:57.768 ] 00:23:57.768 }, 00:23:57.768 { 00:23:57.768 "subsystem": "bdev", 00:23:57.768 "config": [ 00:23:57.768 { 00:23:57.768 "method": "bdev_set_options", 00:23:57.768 "params": { 00:23:57.768 "bdev_io_pool_size": 65535, 00:23:57.768 "bdev_io_cache_size": 256, 00:23:57.768 "bdev_auto_examine": true, 00:23:57.768 "iobuf_small_cache_size": 128, 00:23:57.768 "iobuf_large_cache_size": 16 00:23:57.768 } 00:23:57.768 }, 00:23:57.768 { 00:23:57.768 "method": "bdev_raid_set_options", 00:23:57.768 "params": { 00:23:57.768 "process_window_size_kb": 1024, 00:23:57.768 "process_max_bandwidth_mb_sec": 0 00:23:57.768 } 00:23:57.768 }, 00:23:57.768 { 00:23:57.768 "method": "bdev_iscsi_set_options", 00:23:57.768 "params": { 00:23:57.768 "timeout_sec": 30 00:23:57.768 } 00:23:57.768 }, 00:23:57.768 { 00:23:57.768 "method": "bdev_nvme_set_options", 00:23:57.768 "params": { 00:23:57.768 "action_on_timeout": "none", 00:23:57.768 "timeout_us": 0, 00:23:57.768 "timeout_admin_us": 0, 00:23:57.768 "keep_alive_timeout_ms": 10000, 00:23:57.768 "arbitration_burst": 0, 00:23:57.768 "low_priority_weight": 0, 00:23:57.768 "medium_priority_weight": 0, 00:23:57.768 "high_priority_weight": 0, 00:23:57.768 "nvme_adminq_poll_period_us": 10000, 00:23:57.768 "nvme_ioq_poll_period_us": 0, 00:23:57.768 "io_queue_requests": 512, 00:23:57.768 "delay_cmd_submit": true, 00:23:57.768 "transport_retry_count": 4, 00:23:57.768 "bdev_retry_count": 3, 00:23:57.768 "transport_ack_timeout": 0, 00:23:57.768 "ctrlr_loss_timeout_sec": 0, 00:23:57.768 "reconnect_delay_sec": 0, 00:23:57.768 "fast_io_fail_timeout_sec": 0, 00:23:57.768 "disable_auto_failback": false, 00:23:57.768 "generate_uuids": false, 00:23:57.768 "transport_tos": 0, 00:23:57.768 "nvme_error_stat": false, 00:23:57.768 "rdma_srq_size": 0, 00:23:57.768 "io_path_stat": false, 00:23:57.768 "allow_accel_sequence": false, 00:23:57.768 "rdma_max_cq_size": 0, 00:23:57.768 "rdma_cm_event_timeout_ms": 0, 00:23:57.768 "dhchap_digests": [ 00:23:57.768 "sha256", 00:23:57.768 "sha384", 00:23:57.768 "sha512" 00:23:57.768 ], 00:23:57.768 "dhchap_dhgroups": [ 00:23:57.768 "null", 00:23:57.768 "ffdhe2048", 00:23:57.768 "ffdhe3072", 00:23:57.768 "ffdhe4096", 00:23:57.768 "ffdhe6144", 00:23:57.768 "ffdhe8192" 00:23:57.768 ] 00:23:57.768 } 00:23:57.768 }, 00:23:57.768 { 00:23:57.768 "method": "bdev_nvme_attach_controller", 00:23:57.768 "params": { 00:23:57.768 "name": "nvme0", 00:23:57.768 "trtype": "TCP", 00:23:57.768 "adrfam": "IPv4", 00:23:57.768 "traddr": "10.0.0.2", 00:23:57.768 "trsvcid": "4420", 00:23:57.768 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.768 "prchk_reftag": false, 00:23:57.768 "prchk_guard": false, 00:23:57.768 "ctrlr_loss_timeout_sec": 0, 00:23:57.769 "reconnect_delay_sec": 0, 00:23:57.769 "fast_io_fail_timeout_sec": 0, 00:23:57.769 "psk": "key0", 00:23:57.769 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:57.769 "hdgst": false, 00:23:57.769 "ddgst": false 00:23:57.769 } 00:23:57.769 }, 00:23:57.769 { 00:23:57.769 "method": "bdev_nvme_set_hotplug", 00:23:57.769 "params": { 00:23:57.769 "period_us": 100000, 00:23:57.769 "enable": false 00:23:57.769 } 00:23:57.769 }, 00:23:57.769 { 00:23:57.769 "method": "bdev_enable_histogram", 00:23:57.769 
"params": { 00:23:57.769 "name": "nvme0n1", 00:23:57.769 "enable": true 00:23:57.769 } 00:23:57.769 }, 00:23:57.769 { 00:23:57.769 "method": "bdev_wait_for_examine" 00:23:57.769 } 00:23:57.769 ] 00:23:57.769 }, 00:23:57.769 { 00:23:57.769 "subsystem": "nbd", 00:23:57.769 "config": [] 00:23:57.769 } 00:23:57.769 ] 00:23:57.769 }' 00:23:57.769 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 697949 00:23:57.769 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 697949 ']' 00:23:57.769 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 697949 00:23:57.769 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:57.769 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:57.769 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 697949 00:23:57.769 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:57.769 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:57.769 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 697949' 00:23:57.769 killing process with pid 697949 00:23:57.769 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 697949 00:23:57.769 Received shutdown signal, test time was about 1.000000 seconds 00:23:57.769 00:23:57.769 Latency(us) 00:23:57.769 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:57.769 =================================================================================================================== 00:23:57.769 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:57.769 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 697949 00:23:58.709 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 697793 00:23:58.709 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 697793 ']' 00:23:58.709 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 697793 00:23:58.709 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:58.709 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:58.709 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 697793 00:23:58.709 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:58.709 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:58.709 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 697793' 00:23:58.709 killing process with pid 697793 00:23:58.709 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 697793 00:23:58.709 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 697793 00:24:00.091 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:24:00.091 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:24:00.091 
"subsystems": [ 00:24:00.091 { 00:24:00.091 "subsystem": "keyring", 00:24:00.091 "config": [ 00:24:00.091 { 00:24:00.091 "method": "keyring_file_add_key", 00:24:00.091 "params": { 00:24:00.091 "name": "key0", 00:24:00.091 "path": "/tmp/tmp.oDLnr2BK6p" 00:24:00.091 } 00:24:00.091 } 00:24:00.091 ] 00:24:00.091 }, 00:24:00.091 { 00:24:00.091 "subsystem": "iobuf", 00:24:00.091 "config": [ 00:24:00.091 { 00:24:00.091 "method": "iobuf_set_options", 00:24:00.091 "params": { 00:24:00.091 "small_pool_count": 8192, 00:24:00.091 "large_pool_count": 1024, 00:24:00.091 "small_bufsize": 8192, 00:24:00.091 "large_bufsize": 135168 00:24:00.091 } 00:24:00.091 } 00:24:00.091 ] 00:24:00.091 }, 00:24:00.091 { 00:24:00.091 "subsystem": "sock", 00:24:00.091 "config": [ 00:24:00.091 { 00:24:00.091 "method": "sock_set_default_impl", 00:24:00.091 "params": { 00:24:00.091 "impl_name": "posix" 00:24:00.091 } 00:24:00.091 }, 00:24:00.091 { 00:24:00.091 "method": "sock_impl_set_options", 00:24:00.091 "params": { 00:24:00.091 "impl_name": "ssl", 00:24:00.091 "recv_buf_size": 4096, 00:24:00.091 "send_buf_size": 4096, 00:24:00.091 "enable_recv_pipe": true, 00:24:00.091 "enable_quickack": false, 00:24:00.091 "enable_placement_id": 0, 00:24:00.091 "enable_zerocopy_send_server": true, 00:24:00.091 "enable_zerocopy_send_client": false, 00:24:00.091 "zerocopy_threshold": 0, 00:24:00.091 "tls_version": 0, 00:24:00.091 "enable_ktls": false 00:24:00.091 } 00:24:00.091 }, 00:24:00.091 { 00:24:00.091 "method": "sock_impl_set_options", 00:24:00.091 "params": { 00:24:00.091 "impl_name": "posix", 00:24:00.091 "recv_buf_size": 2097152, 00:24:00.091 "send_buf_size": 2097152, 00:24:00.091 "enable_recv_pipe": true, 00:24:00.091 "enable_quickack": false, 00:24:00.091 "enable_placement_id": 0, 00:24:00.091 "enable_zerocopy_send_server": true, 00:24:00.091 "enable_zerocopy_send_client": false, 00:24:00.091 "zerocopy_threshold": 0, 00:24:00.091 "tls_version": 0, 00:24:00.091 "enable_ktls": false 00:24:00.091 } 00:24:00.091 } 00:24:00.091 ] 00:24:00.091 }, 00:24:00.091 { 00:24:00.091 "subsystem": "vmd", 00:24:00.091 "config": [] 00:24:00.091 }, 00:24:00.091 { 00:24:00.091 "subsystem": "accel", 00:24:00.091 "config": [ 00:24:00.091 { 00:24:00.091 "method": "accel_set_options", 00:24:00.091 "params": { 00:24:00.091 "small_cache_size": 128, 00:24:00.091 "large_cache_size": 16, 00:24:00.091 "task_count": 2048, 00:24:00.091 "sequence_count": 2048, 00:24:00.091 "buf_count": 2048 00:24:00.091 } 00:24:00.091 } 00:24:00.091 ] 00:24:00.091 }, 00:24:00.091 { 00:24:00.091 "subsystem": "bdev", 00:24:00.091 "config": [ 00:24:00.091 { 00:24:00.091 "method": "bdev_set_options", 00:24:00.091 "params": { 00:24:00.091 "bdev_io_pool_size": 65535, 00:24:00.091 "bdev_io_cache_size": 256, 00:24:00.091 "bdev_auto_examine": true, 00:24:00.091 "iobuf_small_cache_size": 128, 00:24:00.091 "iobuf_large_cache_size": 16 00:24:00.091 } 00:24:00.091 }, 00:24:00.091 { 00:24:00.091 "method": "bdev_raid_set_options", 00:24:00.091 "params": { 00:24:00.091 "process_window_size_kb": 1024, 00:24:00.091 "process_max_bandwidth_mb_sec": 0 00:24:00.091 } 00:24:00.091 }, 00:24:00.091 { 00:24:00.091 "method": "bdev_iscsi_set_options", 00:24:00.091 "params": { 00:24:00.091 "timeout_sec": 30 00:24:00.091 } 00:24:00.091 }, 00:24:00.091 { 00:24:00.092 "method": "bdev_nvme_set_options", 00:24:00.092 "params": { 00:24:00.092 "action_on_timeout": "none", 00:24:00.092 "timeout_us": 0, 00:24:00.092 "timeout_admin_us": 0, 00:24:00.092 "keep_alive_timeout_ms": 10000, 00:24:00.092 
"arbitration_burst": 0, 00:24:00.092 "low_priority_weight": 0, 00:24:00.092 "medium_priority_weight": 0, 00:24:00.092 "high_priority_weight": 0, 00:24:00.092 "nvme_adminq_poll_period_us": 10000, 00:24:00.092 "nvme_ioq_poll_period_us": 0, 00:24:00.092 "io_queue_requests": 0, 00:24:00.092 "delay_cmd_submit": true, 00:24:00.092 "transport_retry_count": 4, 00:24:00.092 "bdev_retry_count": 3, 00:24:00.092 "transport_ack_timeout": 0, 00:24:00.092 "ctrlr_loss_timeout_sec": 0, 00:24:00.092 "reconnect_delay_sec": 0, 00:24:00.092 "fast_io_fail_timeout_sec": 0, 00:24:00.092 "disable_auto_failback": false, 00:24:00.092 "generate_uuids": false, 00:24:00.092 "transport_tos": 0, 00:24:00.092 "nvme_error_stat": false, 00:24:00.092 "rdma_srq_size": 0, 00:24:00.092 "io_path_stat": false, 00:24:00.092 "allow_accel_sequence": false, 00:24:00.092 "rdma_max_cq_size": 0, 00:24:00.092 "rdma_cm_event_timeout_ms": 0, 00:24:00.092 "dhchap_digests": [ 00:24:00.092 "sha256", 00:24:00.092 "sha384", 00:24:00.092 "sha512" 00:24:00.092 ], 00:24:00.092 "dhchap_dhgroups": [ 00:24:00.092 "null", 00:24:00.092 "ffdhe2048", 00:24:00.092 "ffdhe3072", 00:24:00.092 "ffdhe4096", 00:24:00.092 "ffdhe6144", 00:24:00.092 "ffdhe8192" 00:24:00.092 ] 00:24:00.092 } 00:24:00.092 }, 00:24:00.092 { 00:24:00.092 "method": "bdev_nvme_set_hotplug", 00:24:00.092 "params": { 00:24:00.092 "period_us": 100000, 00:24:00.092 "enable": false 00:24:00.092 } 00:24:00.092 }, 00:24:00.092 { 00:24:00.092 "method": "bdev_malloc_create", 00:24:00.092 "params": { 00:24:00.092 "name": "malloc0", 00:24:00.092 "num_blocks": 8192, 00:24:00.092 "block_size": 4096, 00:24:00.092 "physical_block_size": 4096, 00:24:00.092 "uuid": "8246d183-299e-41ca-b804-2c60f7392641", 00:24:00.092 "optimal_io_boundary": 0, 00:24:00.092 "md_size": 0, 00:24:00.092 "dif_type": 0, 00:24:00.092 "dif_is_head_of_md": false, 00:24:00.092 "dif_pi_format": 0 00:24:00.092 } 00:24:00.092 }, 00:24:00.092 { 00:24:00.092 "method": "bdev_wait_for_examine" 00:24:00.092 } 00:24:00.092 ] 00:24:00.092 }, 00:24:00.092 { 00:24:00.092 "subsystem": "nbd", 00:24:00.092 "config": [] 00:24:00.092 }, 00:24:00.092 { 00:24:00.092 "subsystem": "scheduler", 00:24:00.092 "config": [ 00:24:00.092 { 00:24:00.092 "method": "framework_set_scheduler", 00:24:00.092 "params": { 00:24:00.092 "name": "static" 00:24:00.092 } 00:24:00.092 } 00:24:00.092 ] 00:24:00.092 }, 00:24:00.092 { 00:24:00.092 "subsystem": "nvmf", 00:24:00.092 "config": [ 00:24:00.092 { 00:24:00.092 "method": "nvmf_set_config", 00:24:00.092 "params": { 00:24:00.092 "discovery_filter": "match_any", 00:24:00.092 "admin_cmd_passthru": { 00:24:00.092 "identify_ctrlr": false 00:24:00.092 } 00:24:00.092 } 00:24:00.092 }, 00:24:00.092 { 00:24:00.092 "method": "nvmf_set_max_subsystems", 00:24:00.092 "params": { 00:24:00.092 "max_subsystems": 1024 00:24:00.092 } 00:24:00.092 }, 00:24:00.092 { 00:24:00.092 "method": "nvmf_set_crdt", 00:24:00.092 "params": { 00:24:00.092 "crdt1": 0, 00:24:00.092 "crdt2": 0, 00:24:00.092 "crdt3": 0 00:24:00.092 } 00:24:00.092 }, 00:24:00.092 { 00:24:00.092 "method": "nvmf_create_transport", 00:24:00.092 "params": { 00:24:00.092 "trtype": "TCP", 00:24:00.092 "max_queue_depth": 128, 00:24:00.092 "max_io_qpairs_per_ctrlr": 127, 00:24:00.092 "in_capsule_data_size": 4096, 00:24:00.092 "max_io_size": 131072, 00:24:00.092 "io_unit_size": 131072, 00:24:00.092 "max_aq_depth": 128, 00:24:00.092 "num_shared_buffers": 511, 00:24:00.092 "buf_cache_size": 4294967295, 00:24:00.092 "dif_insert_or_strip": false, 00:24:00.092 "zcopy": false, 
00:24:00.092 "c2h_success": false, 00:24:00.092 "sock_priority": 0, 00:24:00.092 "abort_timeout_sec": 1, 00:24:00.092 "ack_timeout": 0, 00:24:00.092 "data_wr_pool_size": 0 00:24:00.092 } 00:24:00.092 }, 00:24:00.092 { 00:24:00.092 "method": "nvmf_create_subsystem", 00:24:00.092 "params": { 00:24:00.092 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.092 "allow_any_host": false, 00:24:00.092 "serial_number": "00000000000000000000", 00:24:00.092 "model_number": "SPDK bdev Controller", 00:24:00.092 "max_namespaces": 32, 00:24:00.092 "min_cntlid": 1, 00:24:00.092 "max_cntlid": 65519, 00:24:00.092 "ana_reporting": false 00:24:00.092 } 00:24:00.092 }, 00:24:00.092 { 00:24:00.092 "method": "nvmf_subsystem_add_host", 00:24:00.092 "params": { 00:24:00.092 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.092 "host": "nqn.2016-06.io.spdk:host1", 00:24:00.092 "psk": "key0" 00:24:00.092 } 00:24:00.092 }, 00:24:00.092 { 00:24:00.092 "method": "nvmf_subsystem_add_ns", 00:24:00.092 "params": { 00:24:00.092 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.092 "namespace": { 00:24:00.092 "nsid": 1, 00:24:00.092 "bdev_name": "malloc0", 00:24:00.092 "nguid": "8246D183299E41CAB8042C60F7392641", 00:24:00.092 "uuid": "8246d183-299e-41ca-b804-2c60f7392641", 00:24:00.092 "no_auto_visible": false 00:24:00.092 } 00:24:00.092 } 00:24:00.092 }, 00:24:00.092 { 00:24:00.092 "method": "nvmf_subsystem_add_listener", 00:24:00.092 "params": { 00:24:00.092 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.092 "listen_address": { 00:24:00.092 "trtype": "TCP", 00:24:00.092 "adrfam": "IPv4", 00:24:00.092 "traddr": "10.0.0.2", 00:24:00.092 "trsvcid": "4420" 00:24:00.092 }, 00:24:00.092 "secure_channel": false, 00:24:00.092 "sock_impl": "ssl" 00:24:00.092 } 00:24:00.092 } 00:24:00.092 ] 00:24:00.092 } 00:24:00.092 ] 00:24:00.092 }' 00:24:00.092 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:00.092 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:00.092 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.092 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=698621 00:24:00.092 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:00.092 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 698621 00:24:00.092 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 698621 ']' 00:24:00.092 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.092 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:00.092 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.093 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:00.093 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.093 [2024-07-26 16:29:19.849194] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:24:00.093 [2024-07-26 16:29:19.849365] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.351 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.351 [2024-07-26 16:29:19.979215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.611 [2024-07-26 16:29:20.228769] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:00.611 [2024-07-26 16:29:20.228851] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.611 [2024-07-26 16:29:20.228885] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.611 [2024-07-26 16:29:20.228911] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.611 [2024-07-26 16:29:20.228933] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:00.611 [2024-07-26 16:29:20.229083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.178 [2024-07-26 16:29:20.753255] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.178 [2024-07-26 16:29:20.785298] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:01.178 [2024-07-26 16:29:20.785630] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.178 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:01.178 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:01.178 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:01.178 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:01.178 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.178 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.178 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=698706 00:24:01.178 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 698706 /var/tmp/bdevperf.sock 00:24:01.178 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 698706 ']' 00:24:01.178 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:01.178 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:01.178 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:01.178 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:01.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:01.178 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:24:01.178 "subsystems": [ 00:24:01.178 { 00:24:01.178 "subsystem": "keyring", 00:24:01.178 "config": [ 00:24:01.178 { 00:24:01.178 "method": "keyring_file_add_key", 00:24:01.178 "params": { 00:24:01.178 "name": "key0", 00:24:01.178 "path": "/tmp/tmp.oDLnr2BK6p" 00:24:01.178 } 00:24:01.178 } 00:24:01.178 ] 00:24:01.178 }, 00:24:01.178 { 00:24:01.178 "subsystem": "iobuf", 00:24:01.178 "config": [ 00:24:01.178 { 00:24:01.178 "method": "iobuf_set_options", 00:24:01.178 "params": { 00:24:01.178 "small_pool_count": 8192, 00:24:01.178 "large_pool_count": 1024, 00:24:01.178 "small_bufsize": 8192, 00:24:01.178 "large_bufsize": 135168 00:24:01.178 } 00:24:01.178 } 00:24:01.178 ] 00:24:01.178 }, 00:24:01.178 { 00:24:01.178 "subsystem": "sock", 00:24:01.178 "config": [ 00:24:01.178 { 00:24:01.178 "method": "sock_set_default_impl", 00:24:01.178 "params": { 00:24:01.178 "impl_name": "posix" 00:24:01.178 } 00:24:01.178 }, 00:24:01.178 { 00:24:01.178 "method": "sock_impl_set_options", 00:24:01.178 "params": { 00:24:01.178 "impl_name": "ssl", 00:24:01.178 "recv_buf_size": 4096, 00:24:01.178 "send_buf_size": 4096, 00:24:01.178 "enable_recv_pipe": true, 00:24:01.178 "enable_quickack": false, 00:24:01.178 "enable_placement_id": 0, 00:24:01.178 "enable_zerocopy_send_server": true, 00:24:01.178 "enable_zerocopy_send_client": false, 00:24:01.178 "zerocopy_threshold": 0, 00:24:01.178 "tls_version": 0, 00:24:01.178 "enable_ktls": false 00:24:01.178 } 00:24:01.178 }, 00:24:01.178 { 00:24:01.178 "method": "sock_impl_set_options", 00:24:01.178 "params": { 00:24:01.178 "impl_name": "posix", 00:24:01.178 "recv_buf_size": 2097152, 00:24:01.178 "send_buf_size": 2097152, 00:24:01.178 "enable_recv_pipe": true, 00:24:01.178 "enable_quickack": false, 00:24:01.178 "enable_placement_id": 0, 00:24:01.178 "enable_zerocopy_send_server": true, 00:24:01.178 "enable_zerocopy_send_client": false, 00:24:01.178 "zerocopy_threshold": 0, 00:24:01.178 "tls_version": 0, 00:24:01.178 "enable_ktls": false 00:24:01.178 } 00:24:01.178 } 00:24:01.178 ] 00:24:01.178 }, 00:24:01.178 { 00:24:01.178 "subsystem": "vmd", 00:24:01.178 "config": [] 00:24:01.178 }, 00:24:01.178 { 00:24:01.178 "subsystem": "accel", 00:24:01.178 "config": [ 00:24:01.178 { 00:24:01.178 "method": "accel_set_options", 00:24:01.178 "params": { 00:24:01.178 "small_cache_size": 128, 00:24:01.178 "large_cache_size": 16, 00:24:01.178 "task_count": 2048, 00:24:01.178 "sequence_count": 2048, 00:24:01.178 "buf_count": 2048 00:24:01.178 } 00:24:01.178 } 00:24:01.178 ] 00:24:01.178 }, 00:24:01.178 { 00:24:01.178 "subsystem": "bdev", 00:24:01.178 "config": [ 00:24:01.178 { 00:24:01.178 "method": "bdev_set_options", 00:24:01.178 "params": { 00:24:01.178 "bdev_io_pool_size": 65535, 00:24:01.178 "bdev_io_cache_size": 256, 00:24:01.178 "bdev_auto_examine": true, 00:24:01.178 "iobuf_small_cache_size": 128, 00:24:01.178 "iobuf_large_cache_size": 16 00:24:01.178 } 00:24:01.178 }, 00:24:01.178 { 00:24:01.178 "method": "bdev_raid_set_options", 00:24:01.178 "params": { 00:24:01.178 "process_window_size_kb": 1024, 00:24:01.178 "process_max_bandwidth_mb_sec": 0 00:24:01.178 } 00:24:01.178 }, 00:24:01.178 { 00:24:01.178 "method": "bdev_iscsi_set_options", 00:24:01.178 "params": { 00:24:01.178 "timeout_sec": 30 00:24:01.178 } 00:24:01.178 }, 00:24:01.178 { 00:24:01.178 "method": "bdev_nvme_set_options", 00:24:01.178 "params": { 00:24:01.178 "action_on_timeout": "none", 00:24:01.178 "timeout_us": 0, 
00:24:01.179 "timeout_admin_us": 0, 00:24:01.179 "keep_alive_timeout_ms": 10000, 00:24:01.179 "arbitration_burst": 0, 00:24:01.179 "low_priority_weight": 0, 00:24:01.179 "medium_priority_weight": 0, 00:24:01.179 "high_priority_weight": 0, 00:24:01.179 "nvme_adminq_poll_period_us": 10000, 00:24:01.179 "nvme_ioq_poll_period_us": 0, 00:24:01.179 "io_queue_requests": 512, 00:24:01.179 "delay_cmd_submit": true, 00:24:01.179 "transport_retry_count": 4, 00:24:01.179 "bdev_retry_count": 3, 00:24:01.179 "transport_ack_timeout": 0, 00:24:01.179 "ctrlr_loss_timeout_sec": 0, 00:24:01.179 "reconnect_delay_sec": 0, 00:24:01.179 "fast_io_fail_timeout_sec": 0, 00:24:01.179 "disable_auto_failback": false, 00:24:01.179 "generate_uuids": false, 00:24:01.179 "transport_tos": 0, 00:24:01.179 "nvme_error_stat": false, 00:24:01.179 "rdma_srq_size": 0, 00:24:01.179 "io_path_stat": false, 00:24:01.179 "allow_accel_sequence": false, 00:24:01.179 "rdma_max_cq_size": 0, 00:24:01.179 "rdma_cm_event_timeout_ms": 0, 00:24:01.179 "dhchap_digests": [ 00:24:01.179 "sha256", 00:24:01.179 "sha384", 00:24:01.179 "sha512" 00:24:01.179 ], 00:24:01.179 "dhchap_dhgroups": [ 00:24:01.179 "null", 00:24:01.179 "ffdhe2048", 00:24:01.179 "ffdhe3072", 00:24:01.179 "ffdhe4096", 00:24:01.179 "ffdhe6144", 00:24:01.179 "ffdhe8192" 00:24:01.179 ] 00:24:01.179 } 00:24:01.179 }, 00:24:01.179 { 00:24:01.179 "method": "bdev_nvme_attach_controller", 00:24:01.179 "params": { 00:24:01.179 "name": "nvme0", 00:24:01.179 "trtype": "TCP", 00:24:01.179 "adrfam": "IPv4", 00:24:01.179 "traddr": "10.0.0.2", 00:24:01.179 "trsvcid": "4420", 00:24:01.179 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.179 "prchk_reftag": false, 00:24:01.179 "prchk_guard": false, 00:24:01.179 "ctrlr_loss_timeout_sec": 0, 00:24:01.179 "reconnect_delay_sec": 0, 00:24:01.179 "fast_io_fail_timeout_sec": 0, 00:24:01.179 "psk": "key0", 00:24:01.179 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:01.179 "hdgst": false, 00:24:01.179 "ddgst": false 00:24:01.179 } 00:24:01.179 }, 00:24:01.179 { 00:24:01.179 "method": "bdev_nvme_set_hotplug", 00:24:01.179 "params": { 00:24:01.179 "period_us": 100000, 00:24:01.179 "enable": false 00:24:01.179 } 00:24:01.179 }, 00:24:01.179 { 00:24:01.179 "method": "bdev_enable_histogram", 00:24:01.179 "params": { 00:24:01.179 "name": "nvme0n1", 00:24:01.179 "enable": true 00:24:01.179 } 00:24:01.179 }, 00:24:01.179 { 00:24:01.179 "method": "bdev_wait_for_examine" 00:24:01.179 } 00:24:01.179 ] 00:24:01.179 }, 00:24:01.179 { 00:24:01.179 "subsystem": "nbd", 00:24:01.179 "config": [] 00:24:01.179 } 00:24:01.179 ] 00:24:01.179 }' 00:24:01.179 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:01.179 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.179 [2024-07-26 16:29:20.914799] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:24:01.179 [2024-07-26 16:29:20.914954] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid698706 ] 00:24:01.439 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.439 [2024-07-26 16:29:21.043464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.699 [2024-07-26 16:29:21.303615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.997 [2024-07-26 16:29:21.704221] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:02.259 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:02.259 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:02.259 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:02.259 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:24:02.519 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.519 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:02.519 Running I/O for 1 seconds... 00:24:03.899 00:24:03.899 Latency(us) 00:24:03.899 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.899 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:03.899 Verification LBA range: start 0x0 length 0x2000 00:24:03.899 nvme0n1 : 1.05 2493.11 9.74 0.00 0.00 50261.99 8980.86 73400.32 00:24:03.899 =================================================================================================================== 00:24:03.899 Total : 2493.11 9.74 0.00 0.00 50261.99 8980.86 73400.32 00:24:03.899 0 00:24:03.899 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:24:03.899 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:24:03.899 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:03.899 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:24:03.899 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:24:03.899 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:24:03.899 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:03.899 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:24:03.899 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:24:03.899 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:24:03.899 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:03.899 nvmf_trace.0 00:24:03.899 16:29:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:24:03.899 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 698706 00:24:03.899 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 698706 ']' 00:24:03.899 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 698706 00:24:03.899 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:03.899 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:03.899 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 698706 00:24:03.899 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:03.899 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:03.899 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 698706' 00:24:03.899 killing process with pid 698706 00:24:03.899 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 698706 00:24:03.899 Received shutdown signal, test time was about 1.000000 seconds 00:24:03.899 00:24:03.899 Latency(us) 00:24:03.899 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.899 =================================================================================================================== 00:24:03.899 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:03.899 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 698706 00:24:04.836 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:04.836 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:04.836 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:24:04.836 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:04.836 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:24:04.836 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:04.836 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:04.836 rmmod nvme_tcp 00:24:04.836 rmmod nvme_fabrics 00:24:04.836 rmmod nvme_keyring 00:24:04.836 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:04.836 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:24:04.836 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:24:04.836 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 698621 ']' 00:24:04.836 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 698621 00:24:04.836 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 698621 ']' 00:24:04.836 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 698621 00:24:04.836 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:04.836 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:04.836 16:29:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 698621 00:24:04.836 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:04.836 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:04.836 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 698621' 00:24:04.836 killing process with pid 698621 00:24:04.836 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 698621 00:24:04.836 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 698621 00:24:06.240 16:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:06.240 16:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:06.240 16:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:06.240 16:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:06.240 16:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:06.240 16:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.240 16:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:06.240 16:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.145 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:08.403 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.mZYBWbXbzO /tmp/tmp.ba2z26N2kf /tmp/tmp.oDLnr2BK6p 00:24:08.403 00:24:08.403 real 1m49.893s 00:24:08.403 user 2m59.878s 00:24:08.403 sys 0m26.353s 00:24:08.403 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:08.403 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:08.403 ************************************ 00:24:08.403 END TEST nvmf_tls 00:24:08.403 ************************************ 00:24:08.403 16:29:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:08.403 16:29:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:08.403 16:29:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:08.403 16:29:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:08.403 ************************************ 00:24:08.403 START TEST nvmf_fips 00:24:08.403 ************************************ 00:24:08.403 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:08.403 * Looking for test storage... 
00:24:08.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- 
# awk '{print $2}' 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:24:08.404 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:24:08.405 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:24:08.405 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:24:08.405 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:08.405 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:08.405 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:24:08.405 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:24:08.405 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:24:08.405 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:24:08.405 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:24:08.405 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:24:08.405 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:08.405 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:24:08.405 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:24:08.405 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:24:08.405 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:24:08.405 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:24:08.405 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:24:08.405 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:08.405 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:24:08.405 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:24:08.405 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:24:08.405 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:08.405 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:24:08.405 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:08.405 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:24:08.405 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:08.405 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:24:08.405 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:08.405 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:24:08.405 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:24:08.405 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:24:08.663 Error setting digest 00:24:08.663 00222CDD0B7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:24:08.663 00222CDD0B7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:24:08.663 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:24:08.663 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:08.663 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:08.663 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:08.663 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:24:08.663 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:08.663 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:08.663 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:08.663 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:08.663 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:08.663 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.663 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:08.663 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.663 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:08.663 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:08.663 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:24:08.663 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # 
local -ga e810 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:10.565 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 
00:24:10.565 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.565 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:10.566 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:10.566 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:10.566 
16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:10.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:10.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:24:10.566 00:24:10.566 --- 10.0.0.2 ping statistics --- 00:24:10.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.566 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:10.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:10.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:24:10.566 00:24:10.566 --- 10.0.0.1 ping statistics --- 00:24:10.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.566 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:10.566 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:10.824 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:24:10.824 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:10.824 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:10.824 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:10.824 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=701281 00:24:10.824 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:10.824 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 701281 00:24:10.824 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 701281 ']' 00:24:10.824 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.824 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:10.824 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.824 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:10.824 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:10.824 [2024-07-26 16:29:30.468783] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:24:10.824 [2024-07-26 16:29:30.468927] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.824 EAL: No free 2048 kB hugepages reported on node 1 00:24:11.082 [2024-07-26 16:29:30.610303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.341 [2024-07-26 16:29:30.864049] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:11.341 [2024-07-26 16:29:30.864133] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:11.341 [2024-07-26 16:29:30.864161] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:11.341 [2024-07-26 16:29:30.864187] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:11.341 [2024-07-26 16:29:30.864208] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:11.341 [2024-07-26 16:29:30.864263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:11.909 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:11.909 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:24:11.909 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:11.909 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:11.909 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:11.909 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:11.909 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:24:11.909 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:11.909 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:11.909 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:11.909 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:11.909 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:11.909 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:11.909 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:11.909 [2024-07-26 16:29:31.642510] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:11.909 [2024-07-26 16:29:31.658458] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:11.909 [2024-07-26 16:29:31.658783] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:12.169 
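Note: the TLS piece of this FIPS case, visible in the fips.sh trace above and in the bdevperf attach that follows, amounts to roughly the steps below. This is a condensed sketch of what fips.sh appears to do, not a verbatim replay; $spdk is shorthand introduced here for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout, and the PSK value is shortened to its prefix.

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    key_path=$spdk/test/nvmf/fips/key.txt
    # write an NVMe TLS PSK in interchange format and keep the file private to the test user
    echo -n "NVMeTLSkey-1:01:..." > "$key_path"
    chmod 0600 "$key_path"
    # the target side is configured via setup_nvmf_tgt_conf (rpc.py against /var/tmp/spdk.sock);
    # the initiator side then attaches a TLS-enabled controller through bdevperf's RPC socket
    $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk "$key_path"

The warnings about nvmf_tcp_psk_path and spdk_nvme_ctrlr_opts.psk in the surrounding trace refer to this PSK-by-path mechanism, which the log itself notes is deprecated and scheduled for removal in v24.09.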
[2024-07-26 16:29:31.732742] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:12.169 malloc0 00:24:12.169 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:12.169 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=701437 00:24:12.169 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:12.169 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 701437 /var/tmp/bdevperf.sock 00:24:12.169 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 701437 ']' 00:24:12.169 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:12.169 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:12.169 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:12.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:12.169 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:12.169 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:12.169 [2024-07-26 16:29:31.886198] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:24:12.169 [2024-07-26 16:29:31.886343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid701437 ] 00:24:12.427 EAL: No free 2048 kB hugepages reported on node 1 00:24:12.427 [2024-07-26 16:29:32.019867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.687 [2024-07-26 16:29:32.248155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:13.254 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:13.254 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:24:13.254 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:13.512 [2024-07-26 16:29:33.114287] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:13.512 [2024-07-26 16:29:33.114482] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:13.512 TLSTESTn1 00:24:13.513 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:13.773 Running I/O for 10 seconds... 
00:24:23.765 00:24:23.765 Latency(us) 00:24:23.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:23.765 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:23.765 Verification LBA range: start 0x0 length 0x2000 00:24:23.765 TLSTESTn1 : 10.04 1825.44 7.13 0.00 0.00 69969.07 12136.30 74565.40 00:24:23.765 =================================================================================================================== 00:24:23.765 Total : 1825.44 7.13 0.00 0.00 69969.07 12136.30 74565.40 00:24:23.765 0 00:24:23.765 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:23.765 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:23.765 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:24:23.765 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:24:23.765 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:24:23.765 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:23.765 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:24:23.765 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:24:23.765 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:24:23.765 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:23.765 nvmf_trace.0 00:24:23.765 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:24:23.765 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 701437 00:24:23.765 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 701437 ']' 00:24:23.765 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 701437 00:24:23.765 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:24:23.765 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:23.765 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 701437 00:24:24.070 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:24.070 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:24.070 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 701437' 00:24:24.070 killing process with pid 701437 00:24:24.070 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 701437 00:24:24.070 Received shutdown signal, test time was about 10.000000 seconds 00:24:24.070 00:24:24.070 Latency(us) 00:24:24.070 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:24.070 =================================================================================================================== 00:24:24.070 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:24.070 [2024-07-26 
16:29:43.532730] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:24.070 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 701437 00:24:25.009 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:25.009 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:25.009 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:24:25.009 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:25.009 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:24:25.009 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:25.009 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:25.009 rmmod nvme_tcp 00:24:25.009 rmmod nvme_fabrics 00:24:25.009 rmmod nvme_keyring 00:24:25.009 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:25.009 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:24:25.009 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:24:25.009 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 701281 ']' 00:24:25.009 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 701281 00:24:25.009 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 701281 ']' 00:24:25.009 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 701281 00:24:25.009 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:24:25.009 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:25.009 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 701281 00:24:25.009 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:25.009 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:25.009 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 701281' 00:24:25.009 killing process with pid 701281 00:24:25.009 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 701281 00:24:25.009 [2024-07-26 16:29:44.592322] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:25.009 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 701281 00:24:26.393 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:26.393 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:26.393 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:26.393 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:26.393 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:26.393 16:29:45 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.393 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:26.393 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.301 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:28.301 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:28.301 00:24:28.301 real 0m20.062s 00:24:28.301 user 0m23.776s 00:24:28.301 sys 0m6.977s 00:24:28.301 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:28.301 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:28.301 ************************************ 00:24:28.301 END TEST nvmf_fips 00:24:28.301 ************************************ 00:24:28.301 16:29:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 1 -eq 1 ']' 00:24:28.301 16:29:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@46 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:28.301 16:29:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:28.301 16:29:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:28.301 16:29:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:28.559 ************************************ 00:24:28.559 START TEST nvmf_fuzz 00:24:28.559 ************************************ 00:24:28.559 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:28.559 * Looking for test storage... 
00:24:28.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:28.559 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:28.559 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:28.559 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.559 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.559 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.559 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.559 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:28.559 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.559 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.559 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.559 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.559 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.559 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:28.559 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:28.559 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.559 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.559 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:28.559 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:28.559 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:28.559 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.559 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.559 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.559 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.560 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.560 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.560 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:28.560 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.560 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:24:28.560 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:28.560 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:28.560 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:28.560 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.560 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.560 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:28.560 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:28.560 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:28.560 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:28.560 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:28.560 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.560 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:28.560 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 
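Note: for orientation before the NIC-discovery trace that follows, the fuzz stage further down in this log configures its target and then runs two nvme_fuzz passes. A condensed sketch, with rpc_cmd standing for the test framework's wrapper around scripts/rpc.py and $spdk for the SPDK checkout path used in this job:

    # one TCP transport, one 64 MB malloc-backed namespace, one listener on 10.0.0.2:4420
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create -b Malloc0 64 512
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
    # pass 1: 30 seconds of randomized commands with a fixed seed; pass 2: replay example.json
    $spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a
    $spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F "$trid" -j $spdk/test/app/fuzz/nvme_fuzz/example.json -a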
00:24:28.560 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:28.560 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.560 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.560 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.560 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:28.560 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:28.560 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:24:28.560 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:30.463 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:30.463 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:30.463 16:29:50 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:30.463 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:30.463 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:30.464 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:30.464 
16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:30.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:30.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:24:30.464 00:24:30.464 --- 10.0.0.2 ping statistics --- 00:24:30.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.464 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:30.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:30.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:24:30.464 00:24:30.464 --- 10.0.0.1 ping statistics --- 00:24:30.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.464 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:30.464 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:30.722 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=704958 00:24:30.722 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:30.722 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 704958 00:24:30.722 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 704958 ']' 00:24:30.722 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.722 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:30.722 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:30.722 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:30.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:30.722 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:30.722 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:31.661 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:31.661 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:24:31.661 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:31.661 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.661 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:31.661 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.661 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:31.661 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.661 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:31.661 Malloc0 00:24:31.661 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.661 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:31.661 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.661 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:31.661 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.661 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:31.661 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.661 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:31.661 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.661 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:31.661 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.661 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:31.661 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.661 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp 
adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:31.661 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:03.744 Fuzzing completed. Shutting down the fuzz application 00:25:03.744 00:25:03.744 Dumping successful admin opcodes: 00:25:03.744 8, 9, 10, 24, 00:25:03.744 Dumping successful io opcodes: 00:25:03.744 0, 9, 00:25:03.744 NS: 0x200003aefec0 I/O qp, Total commands completed: 312167, total successful commands: 1844, random_seed: 3583305216 00:25:03.744 NS: 0x200003aefec0 admin qp, Total commands completed: 39328, total successful commands: 320, random_seed: 2053837568 00:25:03.744 16:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:05.125 Fuzzing completed. Shutting down the fuzz application 00:25:05.125 00:25:05.125 Dumping successful admin opcodes: 00:25:05.125 24, 00:25:05.125 Dumping successful io opcodes: 00:25:05.125 00:25:05.125 NS: 0x200003aefec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2963347764 00:25:05.125 NS: 0x200003aefec0 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2963572098 00:25:05.125 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:05.125 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.125 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:05.125 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.125 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:05.125 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:05.125 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:05.125 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:25:05.125 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:05.125 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:25:05.125 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:05.125 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:05.125 rmmod nvme_tcp 00:25:05.125 rmmod nvme_fabrics 00:25:05.125 rmmod nvme_keyring 00:25:05.125 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:05.125 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:25:05.125 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:25:05.125 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 704958 ']' 00:25:05.125 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@490 -- # 
killprocess 704958 00:25:05.125 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 704958 ']' 00:25:05.125 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 704958 00:25:05.125 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:25:05.125 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:05.125 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 704958 00:25:05.125 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:05.125 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:05.125 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 704958' 00:25:05.125 killing process with pid 704958 00:25:05.125 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 704958 00:25:05.125 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 704958 00:25:06.504 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:06.504 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:06.504 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:06.504 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:06.504 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:06.504 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.504 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:06.504 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.069 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:09.069 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:09.069 00:25:09.069 real 0m40.235s 00:25:09.069 user 0m57.648s 00:25:09.069 sys 0m13.618s 00:25:09.069 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:09.069 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:09.069 ************************************ 00:25:09.069 END TEST nvmf_fuzz 00:25:09.069 ************************************ 00:25:09.069 16:30:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:09.069 16:30:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:09.069 16:30:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:09.069 16:30:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:09.069 ************************************ 00:25:09.069 START 
TEST nvmf_multiconnection 00:25:09.069 ************************************ 00:25:09.069 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:09.069 * Looking for test storage... 00:25:09.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:09.069 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:09.069 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:09.069 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:09.069 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:09.069 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:09.069 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:09.069 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:09.069 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:09.069 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:09.069 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:09.069 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:09.069 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:09.069 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:09.069 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:09.069 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:09.069 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:09.069 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:09.069 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:09.069 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:09.069 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:09.069 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:09.069 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:09.069 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.069 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.069 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.070 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:09.070 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.070 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:25:09.070 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:09.070 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:09.070 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:09.070 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:09.070 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:09.070 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:09.070 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:09.070 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:09.070 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:09.070 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:09.070 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:09.070 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:09.070 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:09.070 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:09.070 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:09.070 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:09.070 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:09.070 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.070 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:09.070 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.070 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:09.070 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:09.070 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:25:09.070 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:25:10.975 16:30:30 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:10.975 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- 
# [[ tcp == rdma ]] 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:10.975 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:10.975 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.975 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:10.976 Found net devices 
under 0000:0a:00.1: cvl_0_1 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:10.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:10.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:25:10.976 00:25:10.976 --- 10.0.0.2 ping statistics --- 00:25:10.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.976 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:10.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:10.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:25:10.976 00:25:10.976 --- 10.0.0.1 ping statistics --- 00:25:10.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.976 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=711581 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 711581 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 711581 ']' 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:10.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
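The nvmftestinit / nvmf_tcp_init block above boils down to a two-port loopback topology on the detected e810 ports: the target port cvl_0_0 is moved into its own network namespace and addressed as 10.0.0.2, the initiator port cvl_0_1 stays in the default namespace as 10.0.0.1, and nvmf_tgt is then launched inside that namespace. Condensed from the xtrace above (a sketch of the sequence, not the literal common.sh code; interface names and addresses are the ones printed by the trace):

ip netns add cvl_0_0_ns_spdk                                  # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator port, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic through
ping -c 1 10.0.0.2                                            # initiator -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator check
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # target on core mask 0xF

The -m 0xF core mask matches the "Total cores available: 4" notice and the four reactors started in the log entries that follow.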
00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:10.976 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:10.976 [2024-07-26 16:30:30.568212] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:25:10.976 [2024-07-26 16:30:30.568390] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:10.976 EAL: No free 2048 kB hugepages reported on node 1 00:25:10.976 [2024-07-26 16:30:30.710788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:11.236 [2024-07-26 16:30:30.972748] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:11.236 [2024-07-26 16:30:30.972829] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:11.236 [2024-07-26 16:30:30.972857] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:11.236 [2024-07-26 16:30:30.972879] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:11.236 [2024-07-26 16:30:30.972901] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:11.236 [2024-07-26 16:30:30.973033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.236 [2024-07-26 16:30:30.973115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:11.236 [2024-07-26 16:30:30.973138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.236 [2024-07-26 16:30:30.973148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:11.803 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:11.803 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:25:11.803 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:11.803 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:11.803 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.803 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:11.803 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:11.803 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.803 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.803 [2024-07-26 16:30:31.546014] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:11.803 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.803 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:11.803 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
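The loop that follows (target/multiconnection.sh lines 21-25 in the trace) provisions NVMF_SUBSYS=11 identical subsystems, each backed by a 64 MB malloc bdev with a 512-byte block size, and exposes every one of them on the same 10.0.0.2:4420 TCP listener. Reconstructed from the repeated rpc_cmd calls below (a sketch, not the verbatim script):

for i in $(seq 1 $NVMF_SUBSYS); do                                                  # NVMF_SUBSYS=11
    rpc_cmd bdev_malloc_create $MALLOC_BDEV_SIZE $MALLOC_BLOCK_SIZE -b Malloc$i     # 64 MB bdev, 512 B blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i          # serial number SPDK$i
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i              # attach the bdev as a namespace
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done

Once all eleven subsystems exist, the host loop further down connects to each of them in turn with nvme connect (the host NQN shown in the trace, -t tcp -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420) and polls lsblk until a device with serial SPDK$i appears, and the fio-wrapper run at the end of the trace then reads from all eleven resulting namespaces (/dev/nvme0n1 through /dev/nvme10n1) with 256 KiB requests at queue depth 64 for 10 seconds via libaio.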
00:25:11.803 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:11.803 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.803 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.061 Malloc1 00:25:12.061 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.061 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:12.061 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.061 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.061 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.061 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:12.061 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.061 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.061 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.061 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:12.061 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.061 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.061 [2024-07-26 16:30:31.655708] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:12.061 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.061 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:12.061 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:12.062 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.062 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.062 Malloc2 00:25:12.062 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.062 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:12.062 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.062 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.062 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.062 16:30:31 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:12.062 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.062 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.062 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.062 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:12.062 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.062 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.062 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.062 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:12.062 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:12.062 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.062 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.320 Malloc3 00:25:12.320 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.320 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:12.320 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.320 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.320 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.320 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:12.320 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.320 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.320 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.320 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:12.320 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.320 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.320 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.320 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:12.320 16:30:31 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:12.320 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.320 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.320 Malloc4 00:25:12.320 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.320 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:12.320 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.320 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.320 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.320 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:12.320 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.320 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.320 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.320 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:12.320 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.320 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.320 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.320 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:12.320 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:12.320 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.320 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.320 Malloc5 00:25:12.320 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.320 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:12.320 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.320 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.320 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.320 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:12.320 16:30:32 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.320 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.320 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.320 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:12.320 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.320 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.320 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.320 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:12.320 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:12.320 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.320 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.578 Malloc6 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.578 Malloc7 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.578 Malloc8 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.578 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.836 
16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.836 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:12.836 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.836 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.836 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.836 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:12.836 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:12.836 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.836 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.836 Malloc9 00:25:12.836 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.836 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:12.836 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.837 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.837 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.837 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:12.837 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.837 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.837 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.837 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:12.837 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.837 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.837 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.837 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:12.837 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:12.837 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.837 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.837 Malloc10 00:25:12.837 16:30:32 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.837 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:12.837 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.837 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.837 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.837 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:12.837 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.837 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.837 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.837 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:12.837 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.837 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.837 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.837 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:12.837 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:12.837 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.837 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.097 Malloc11 00:25:13.097 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.097 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:13.097 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.097 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.097 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.097 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:13.097 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.097 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.097 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.097 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:13.097 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.097 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.097 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.097 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:13.097 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:13.097 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:13.665 16:30:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:13.665 16:30:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:13.665 16:30:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:13.665 16:30:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:13.665 16:30:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:16.195 16:30:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:16.195 16:30:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:16.195 16:30:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:25:16.195 16:30:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:16.195 16:30:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:16.195 16:30:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:16.195 16:30:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:16.195 16:30:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:16.452 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:16.453 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:16.453 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:16.453 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:16.453 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:18.356 16:30:38 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:18.356 16:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:18.356 16:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:25:18.356 16:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:18.356 16:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:18.356 16:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:18.356 16:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:18.356 16:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:18.924 16:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:18.924 16:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:18.924 16:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:18.925 16:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:18.925 16:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:21.459 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:21.459 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:21.459 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:25:21.459 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:21.459 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:21.459 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:21.459 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.459 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:21.717 16:30:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:21.717 16:30:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:21.717 16:30:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:21.717 16:30:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:25:21.717 16:30:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:24.251 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:24.251 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:24.251 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:25:24.251 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:24.251 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:24.251 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:24.251 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:24.251 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:24.510 16:30:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:24.510 16:30:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:24.510 16:30:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:24.510 16:30:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:24.510 16:30:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:26.439 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:26.439 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:26.439 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:25:26.439 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:26.439 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:26.439 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:26.439 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:26.439 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:27.373 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:27.373 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:27.373 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local 
nvme_device_counter=1 nvme_devices=0 00:25:27.373 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:27.373 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:29.280 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:29.280 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:29.280 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:25:29.280 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:29.280 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:29.280 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:29.280 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.280 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:30.219 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:30.219 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:30.219 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:30.219 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:30.219 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:32.119 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:32.119 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:32.119 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:25:32.119 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:32.119 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:32.119 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:32.119 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:32.119 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:33.052 16:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:33.052 16:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1198 -- # local i=0 00:25:33.052 16:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:33.052 16:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:33.052 16:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:34.954 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:34.954 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:34.954 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:25:34.954 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:34.954 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:34.954 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:34.954 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:34.954 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:35.890 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:35.890 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:35.890 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:35.890 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:35.890 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:37.791 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:37.791 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:37.791 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:25:37.792 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:37.792 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:37.792 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:37.792 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:37.792 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:38.723 16:30:58 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:38.724 16:30:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:38.724 16:30:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:38.724 16:30:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:38.724 16:30:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:41.257 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:41.257 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:41.257 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:25:41.257 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:41.257 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:41.257 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:41.257 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:41.257 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:41.826 16:31:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:41.826 16:31:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:41.826 16:31:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:41.826 16:31:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:41.826 16:31:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:43.767 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:43.767 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:43.767 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:25:43.767 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:43.767 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:43.767 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:43.767 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:43.767 [global] 00:25:43.767 thread=1 00:25:43.767 invalidate=1 00:25:43.767 rw=read 
00:25:43.767 time_based=1 00:25:43.767 runtime=10 00:25:43.767 ioengine=libaio 00:25:43.767 direct=1 00:25:43.767 bs=262144 00:25:43.767 iodepth=64 00:25:43.767 norandommap=1 00:25:43.767 numjobs=1 00:25:43.767 00:25:43.767 [job0] 00:25:43.767 filename=/dev/nvme0n1 00:25:43.767 [job1] 00:25:43.767 filename=/dev/nvme10n1 00:25:43.767 [job2] 00:25:43.767 filename=/dev/nvme1n1 00:25:43.767 [job3] 00:25:43.767 filename=/dev/nvme2n1 00:25:43.767 [job4] 00:25:43.767 filename=/dev/nvme3n1 00:25:43.767 [job5] 00:25:43.767 filename=/dev/nvme4n1 00:25:43.767 [job6] 00:25:43.767 filename=/dev/nvme5n1 00:25:43.767 [job7] 00:25:43.767 filename=/dev/nvme6n1 00:25:43.767 [job8] 00:25:43.767 filename=/dev/nvme7n1 00:25:43.767 [job9] 00:25:43.767 filename=/dev/nvme8n1 00:25:43.767 [job10] 00:25:43.767 filename=/dev/nvme9n1 00:25:43.767 Could not set queue depth (nvme0n1) 00:25:43.767 Could not set queue depth (nvme10n1) 00:25:43.767 Could not set queue depth (nvme1n1) 00:25:43.767 Could not set queue depth (nvme2n1) 00:25:43.767 Could not set queue depth (nvme3n1) 00:25:43.767 Could not set queue depth (nvme4n1) 00:25:43.767 Could not set queue depth (nvme5n1) 00:25:43.767 Could not set queue depth (nvme6n1) 00:25:43.767 Could not set queue depth (nvme7n1) 00:25:43.767 Could not set queue depth (nvme8n1) 00:25:43.767 Could not set queue depth (nvme9n1) 00:25:44.025 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:44.025 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:44.025 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:44.025 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:44.025 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:44.025 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:44.025 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:44.025 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:44.025 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:44.025 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:44.025 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:44.025 fio-3.35 00:25:44.025 Starting 11 threads 00:25:56.234 00:25:56.234 job0: (groupid=0, jobs=1): err= 0: pid=716085: Fri Jul 26 16:31:14 2024 00:25:56.234 read: IOPS=578, BW=145MiB/s (152MB/s)(1471MiB/10159msec) 00:25:56.234 slat (usec): min=10, max=112309, avg=958.91, stdev=5028.97 00:25:56.234 clat (usec): min=1249, max=369398, avg=109486.27, stdev=67488.18 00:25:56.234 lat (usec): min=1272, max=369435, avg=110445.18, stdev=68075.85 00:25:56.234 clat percentiles (msec): 00:25:56.234 | 1.00th=[ 5], 5.00th=[ 14], 10.00th=[ 22], 20.00th=[ 44], 00:25:56.234 | 30.00th=[ 69], 40.00th=[ 81], 50.00th=[ 110], 60.00th=[ 124], 00:25:56.234 | 70.00th=[ 142], 80.00th=[ 161], 90.00th=[ 211], 95.00th=[ 234], 00:25:56.234 | 99.00th=[ 284], 99.50th=[ 288], 99.90th=[ 355], 99.95th=[ 355], 00:25:56.234 | 99.99th=[ 372] 00:25:56.234 bw ( 
KiB/s): min=58368, max=340480, per=10.04%, avg=148923.45, stdev=65886.15, samples=20 00:25:56.234 iops : min= 228, max= 1330, avg=581.70, stdev=257.36, samples=20 00:25:56.234 lat (msec) : 2=0.03%, 4=0.66%, 10=2.87%, 20=5.34%, 50=13.43% 00:25:56.234 lat (msec) : 100=24.62%, 250=49.98%, 500=3.06% 00:25:56.234 cpu : usr=0.26%, sys=1.55%, ctx=1251, majf=0, minf=4097 00:25:56.234 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:56.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:56.234 issued rwts: total=5882,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.234 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:56.234 job1: (groupid=0, jobs=1): err= 0: pid=716086: Fri Jul 26 16:31:14 2024 00:25:56.234 read: IOPS=809, BW=202MiB/s (212MB/s)(2056MiB/10155msec) 00:25:56.234 slat (usec): min=14, max=151789, avg=1095.24, stdev=4169.23 00:25:56.234 clat (usec): min=1594, max=359258, avg=77881.77, stdev=43405.32 00:25:56.234 lat (usec): min=1624, max=359276, avg=78977.01, stdev=43917.89 00:25:56.234 clat percentiles (msec): 00:25:56.234 | 1.00th=[ 7], 5.00th=[ 18], 10.00th=[ 34], 20.00th=[ 39], 00:25:56.234 | 30.00th=[ 49], 40.00th=[ 66], 50.00th=[ 78], 60.00th=[ 87], 00:25:56.234 | 70.00th=[ 99], 80.00th=[ 109], 90.00th=[ 122], 95.00th=[ 142], 00:25:56.234 | 99.00th=[ 245], 99.50th=[ 259], 99.90th=[ 347], 99.95th=[ 347], 00:25:56.234 | 99.99th=[ 359] 00:25:56.234 bw ( KiB/s): min=68096, max=425472, per=14.08%, avg=208815.15, stdev=81398.02, samples=20 00:25:56.234 iops : min= 266, max= 1662, avg=815.60, stdev=317.99, samples=20 00:25:56.234 lat (msec) : 2=0.01%, 4=0.34%, 10=2.18%, 20=2.77%, 50=25.62% 00:25:56.234 lat (msec) : 100=40.80%, 250=27.59%, 500=0.68% 00:25:56.234 cpu : usr=0.44%, sys=2.73%, ctx=1524, majf=0, minf=4097 00:25:56.234 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:56.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:56.234 issued rwts: total=8223,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.234 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:56.234 job2: (groupid=0, jobs=1): err= 0: pid=716087: Fri Jul 26 16:31:14 2024 00:25:56.234 read: IOPS=532, BW=133MiB/s (140MB/s)(1357MiB/10194msec) 00:25:56.234 slat (usec): min=11, max=129339, avg=1735.57, stdev=6207.59 00:25:56.234 clat (msec): min=41, max=395, avg=118.38, stdev=64.01 00:25:56.234 lat (msec): min=41, max=405, avg=120.12, stdev=64.94 00:25:56.234 clat percentiles (msec): 00:25:56.234 | 1.00th=[ 48], 5.00th=[ 52], 10.00th=[ 57], 20.00th=[ 68], 00:25:56.234 | 30.00th=[ 80], 40.00th=[ 90], 50.00th=[ 100], 60.00th=[ 110], 00:25:56.234 | 70.00th=[ 125], 80.00th=[ 161], 90.00th=[ 226], 95.00th=[ 253], 00:25:56.234 | 99.00th=[ 321], 99.50th=[ 351], 99.90th=[ 388], 99.95th=[ 388], 00:25:56.234 | 99.99th=[ 397] 00:25:56.234 bw ( KiB/s): min=58880, max=234027, per=9.25%, avg=137243.55, stdev=58799.08, samples=20 00:25:56.234 iops : min= 230, max= 914, avg=536.05, stdev=229.67, samples=20 00:25:56.234 lat (msec) : 50=3.94%, 100=47.12%, 250=43.89%, 500=5.05% 00:25:56.234 cpu : usr=0.27%, sys=1.81%, ctx=978, majf=0, minf=4097 00:25:56.234 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:56.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.234 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:56.234 issued rwts: total=5427,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.234 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:56.234 job3: (groupid=0, jobs=1): err= 0: pid=716088: Fri Jul 26 16:31:14 2024 00:25:56.234 read: IOPS=457, BW=114MiB/s (120MB/s)(1160MiB/10135msec) 00:25:56.234 slat (usec): min=10, max=234701, avg=1571.76, stdev=7173.76 00:25:56.234 clat (usec): min=1742, max=567589, avg=138176.09, stdev=70250.13 00:25:56.234 lat (usec): min=1764, max=567606, avg=139747.86, stdev=71049.32 00:25:56.234 clat percentiles (msec): 00:25:56.234 | 1.00th=[ 8], 5.00th=[ 26], 10.00th=[ 45], 20.00th=[ 79], 00:25:56.234 | 30.00th=[ 109], 40.00th=[ 125], 50.00th=[ 140], 60.00th=[ 153], 00:25:56.234 | 70.00th=[ 163], 80.00th=[ 184], 90.00th=[ 226], 95.00th=[ 259], 00:25:56.234 | 99.00th=[ 384], 99.50th=[ 393], 99.90th=[ 426], 99.95th=[ 426], 00:25:56.234 | 99.99th=[ 567] 00:25:56.234 bw ( KiB/s): min=59392, max=276950, per=7.89%, avg=117085.50, stdev=44926.03, samples=20 00:25:56.234 iops : min= 232, max= 1081, avg=457.30, stdev=175.34, samples=20 00:25:56.234 lat (msec) : 2=0.04%, 4=0.39%, 10=1.10%, 20=2.01%, 50=9.68% 00:25:56.234 lat (msec) : 100=12.48%, 250=68.48%, 500=5.78%, 750=0.04% 00:25:56.234 cpu : usr=0.23%, sys=1.45%, ctx=929, majf=0, minf=4097 00:25:56.234 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:25:56.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:56.234 issued rwts: total=4638,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.234 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:56.234 job4: (groupid=0, jobs=1): err= 0: pid=716089: Fri Jul 26 16:31:14 2024 00:25:56.234 read: IOPS=412, BW=103MiB/s (108MB/s)(1048MiB/10153msec) 00:25:56.234 slat (usec): min=11, max=83538, avg=1773.52, stdev=5849.17 00:25:56.234 clat (msec): min=9, max=360, avg=153.18, stdev=41.13 00:25:56.234 lat (msec): min=9, max=366, avg=154.95, stdev=41.85 00:25:56.234 clat percentiles (msec): 00:25:56.234 | 1.00th=[ 31], 5.00th=[ 107], 10.00th=[ 118], 20.00th=[ 130], 00:25:56.234 | 30.00th=[ 138], 40.00th=[ 142], 50.00th=[ 148], 60.00th=[ 155], 00:25:56.234 | 70.00th=[ 165], 80.00th=[ 176], 90.00th=[ 199], 95.00th=[ 220], 00:25:56.234 | 99.00th=[ 313], 99.50th=[ 330], 99.90th=[ 347], 99.95th=[ 347], 00:25:56.234 | 99.99th=[ 359] 00:25:56.234 bw ( KiB/s): min=56320, max=136704, per=7.12%, avg=105599.25, stdev=19691.85, samples=20 00:25:56.234 iops : min= 220, max= 534, avg=412.45, stdev=76.97, samples=20 00:25:56.234 lat (msec) : 10=0.02%, 20=0.36%, 50=1.69%, 100=1.98%, 250=93.20% 00:25:56.234 lat (msec) : 500=2.74% 00:25:56.234 cpu : usr=0.30%, sys=1.49%, ctx=960, majf=0, minf=4097 00:25:56.234 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:56.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:56.234 issued rwts: total=4190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.234 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:56.234 job5: (groupid=0, jobs=1): err= 0: pid=716090: Fri Jul 26 16:31:14 2024 00:25:56.234 read: IOPS=439, BW=110MiB/s (115MB/s)(1113MiB/10135msec) 00:25:56.234 slat (usec): min=13, max=64701, avg=2186.84, stdev=5883.92 00:25:56.234 clat (msec): min=45, max=294, 
avg=143.44, stdev=30.41 00:25:56.234 lat (msec): min=45, max=294, avg=145.62, stdev=30.92 00:25:56.234 clat percentiles (msec): 00:25:56.234 | 1.00th=[ 80], 5.00th=[ 106], 10.00th=[ 112], 20.00th=[ 121], 00:25:56.234 | 30.00th=[ 128], 40.00th=[ 133], 50.00th=[ 138], 60.00th=[ 144], 00:25:56.234 | 70.00th=[ 150], 80.00th=[ 165], 90.00th=[ 186], 95.00th=[ 207], 00:25:56.235 | 99.00th=[ 232], 99.50th=[ 247], 99.90th=[ 264], 99.95th=[ 296], 00:25:56.235 | 99.99th=[ 296] 00:25:56.235 bw ( KiB/s): min=72047, max=133365, per=7.57%, avg=112289.30, stdev=16527.15, samples=20 00:25:56.235 iops : min= 281, max= 520, avg=438.55, stdev=64.55, samples=20 00:25:56.235 lat (msec) : 50=0.49%, 100=1.91%, 250=97.28%, 500=0.31% 00:25:56.235 cpu : usr=0.26%, sys=1.63%, ctx=863, majf=0, minf=4097 00:25:56.235 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:56.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.235 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:56.235 issued rwts: total=4451,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.235 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:56.235 job6: (groupid=0, jobs=1): err= 0: pid=716091: Fri Jul 26 16:31:14 2024 00:25:56.235 read: IOPS=436, BW=109MiB/s (114MB/s)(1108MiB/10153msec) 00:25:56.235 slat (usec): min=13, max=124937, avg=1990.52, stdev=6209.63 00:25:56.235 clat (msec): min=8, max=420, avg=144.46, stdev=58.34 00:25:56.235 lat (msec): min=8, max=420, avg=146.45, stdev=59.25 00:25:56.235 clat percentiles (msec): 00:25:56.235 | 1.00th=[ 25], 5.00th=[ 75], 10.00th=[ 87], 20.00th=[ 106], 00:25:56.235 | 30.00th=[ 114], 40.00th=[ 124], 50.00th=[ 134], 60.00th=[ 144], 00:25:56.235 | 70.00th=[ 155], 80.00th=[ 176], 90.00th=[ 228], 95.00th=[ 264], 00:25:56.235 | 99.00th=[ 347], 99.50th=[ 363], 99.90th=[ 414], 99.95th=[ 418], 00:25:56.235 | 99.99th=[ 422] 00:25:56.235 bw ( KiB/s): min=53248, max=171688, per=7.54%, avg=111828.90, stdev=32662.86, samples=20 00:25:56.235 iops : min= 208, max= 670, avg=436.75, stdev=127.47, samples=20 00:25:56.235 lat (msec) : 10=0.11%, 20=0.65%, 50=1.78%, 100=14.17%, 250=77.24% 00:25:56.235 lat (msec) : 500=6.05% 00:25:56.235 cpu : usr=0.23%, sys=1.53%, ctx=889, majf=0, minf=4097 00:25:56.235 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:56.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.235 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:56.235 issued rwts: total=4433,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.235 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:56.235 job7: (groupid=0, jobs=1): err= 0: pid=716092: Fri Jul 26 16:31:14 2024 00:25:56.235 read: IOPS=455, BW=114MiB/s (119MB/s)(1154MiB/10138msec) 00:25:56.235 slat (usec): min=14, max=68484, avg=2046.64, stdev=5638.46 00:25:56.235 clat (msec): min=23, max=306, avg=138.37, stdev=31.16 00:25:56.235 lat (msec): min=23, max=306, avg=140.42, stdev=31.43 00:25:56.235 clat percentiles (msec): 00:25:56.235 | 1.00th=[ 74], 5.00th=[ 99], 10.00th=[ 106], 20.00th=[ 114], 00:25:56.235 | 30.00th=[ 122], 40.00th=[ 130], 50.00th=[ 138], 60.00th=[ 144], 00:25:56.235 | 70.00th=[ 150], 80.00th=[ 159], 90.00th=[ 174], 95.00th=[ 190], 00:25:56.235 | 99.00th=[ 239], 99.50th=[ 262], 99.90th=[ 296], 99.95th=[ 296], 00:25:56.235 | 99.99th=[ 309] 00:25:56.235 bw ( KiB/s): min=82432, max=162491, per=7.86%, avg=116513.60, stdev=18905.90, samples=20 00:25:56.235 iops : 
min= 322, max= 634, avg=455.00, stdev=73.66, samples=20 00:25:56.235 lat (msec) : 50=0.43%, 100=5.41%, 250=93.20%, 500=0.95% 00:25:56.235 cpu : usr=0.20%, sys=1.65%, ctx=907, majf=0, minf=4097 00:25:56.235 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:25:56.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.235 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:56.235 issued rwts: total=4617,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.235 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:56.235 job8: (groupid=0, jobs=1): err= 0: pid=716093: Fri Jul 26 16:31:14 2024 00:25:56.235 read: IOPS=471, BW=118MiB/s (124MB/s)(1195MiB/10137msec) 00:25:56.235 slat (usec): min=10, max=64289, avg=1782.57, stdev=5231.30 00:25:56.235 clat (msec): min=3, max=265, avg=133.90, stdev=39.18 00:25:56.235 lat (msec): min=3, max=265, avg=135.68, stdev=39.84 00:25:56.235 clat percentiles (msec): 00:25:56.235 | 1.00th=[ 8], 5.00th=[ 56], 10.00th=[ 95], 20.00th=[ 113], 00:25:56.235 | 30.00th=[ 122], 40.00th=[ 130], 50.00th=[ 136], 60.00th=[ 144], 00:25:56.235 | 70.00th=[ 150], 80.00th=[ 159], 90.00th=[ 178], 95.00th=[ 197], 00:25:56.235 | 99.00th=[ 220], 99.50th=[ 234], 99.90th=[ 266], 99.95th=[ 266], 00:25:56.235 | 99.99th=[ 266] 00:25:56.235 bw ( KiB/s): min=81245, max=167424, per=8.13%, avg=120657.05, stdev=22960.72, samples=20 00:25:56.235 iops : min= 317, max= 654, avg=471.20, stdev=89.69, samples=20 00:25:56.235 lat (msec) : 4=0.06%, 10=1.42%, 20=1.30%, 50=1.86%, 100=6.76% 00:25:56.235 lat (msec) : 250=88.36%, 500=0.23% 00:25:56.235 cpu : usr=0.19%, sys=1.70%, ctx=994, majf=0, minf=4097 00:25:56.235 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:56.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.235 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:56.235 issued rwts: total=4778,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.235 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:56.235 job9: (groupid=0, jobs=1): err= 0: pid=716094: Fri Jul 26 16:31:14 2024 00:25:56.235 read: IOPS=679, BW=170MiB/s (178MB/s)(1725MiB/10154msec) 00:25:56.235 slat (usec): min=10, max=95022, avg=1277.91, stdev=4725.28 00:25:56.235 clat (usec): min=1902, max=383633, avg=92835.53, stdev=55623.41 00:25:56.235 lat (usec): min=1929, max=400494, avg=94113.44, stdev=56397.34 00:25:56.235 clat percentiles (msec): 00:25:56.235 | 1.00th=[ 23], 5.00th=[ 43], 10.00th=[ 46], 20.00th=[ 49], 00:25:56.235 | 30.00th=[ 55], 40.00th=[ 69], 50.00th=[ 80], 60.00th=[ 91], 00:25:56.235 | 70.00th=[ 104], 80.00th=[ 122], 90.00th=[ 167], 95.00th=[ 215], 00:25:56.235 | 99.00th=[ 305], 99.50th=[ 330], 99.90th=[ 368], 99.95th=[ 380], 00:25:56.235 | 99.99th=[ 384] 00:25:56.235 bw ( KiB/s): min=61440, max=336896, per=11.80%, avg=174982.35, stdev=80371.13, samples=20 00:25:56.235 iops : min= 240, max= 1316, avg=683.45, stdev=313.97, samples=20 00:25:56.235 lat (msec) : 2=0.03%, 4=0.12%, 10=0.14%, 20=0.58%, 50=23.14% 00:25:56.235 lat (msec) : 100=43.74%, 250=30.12%, 500=2.13% 00:25:56.235 cpu : usr=0.39%, sys=2.29%, ctx=1286, majf=0, minf=3722 00:25:56.235 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:56.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.235 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:56.235 issued rwts: total=6900,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:25:56.235 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:56.235 job10: (groupid=0, jobs=1): err= 0: pid=716095: Fri Jul 26 16:31:14 2024 00:25:56.235 read: IOPS=544, BW=136MiB/s (143MB/s)(1381MiB/10154msec) 00:25:56.235 slat (usec): min=9, max=136883, avg=832.36, stdev=4477.89 00:25:56.235 clat (usec): min=1352, max=328493, avg=116704.67, stdev=71416.61 00:25:56.235 lat (usec): min=1398, max=328576, avg=117537.03, stdev=72250.26 00:25:56.235 clat percentiles (msec): 00:25:56.235 | 1.00th=[ 7], 5.00th=[ 14], 10.00th=[ 24], 20.00th=[ 44], 00:25:56.235 | 30.00th=[ 71], 40.00th=[ 92], 50.00th=[ 112], 60.00th=[ 134], 00:25:56.235 | 70.00th=[ 155], 80.00th=[ 178], 90.00th=[ 224], 95.00th=[ 243], 00:25:56.235 | 99.00th=[ 284], 99.50th=[ 292], 99.90th=[ 309], 99.95th=[ 321], 00:25:56.235 | 99.99th=[ 330] 00:25:56.235 bw ( KiB/s): min=62976, max=246784, per=9.42%, avg=139750.65, stdev=53332.67, samples=20 00:25:56.235 iops : min= 246, max= 964, avg=545.85, stdev=208.31, samples=20 00:25:56.235 lat (msec) : 2=0.05%, 4=0.33%, 10=2.66%, 20=5.38%, 50=14.52% 00:25:56.235 lat (msec) : 100=20.51%, 250=52.83%, 500=3.73% 00:25:56.235 cpu : usr=0.33%, sys=1.57%, ctx=1293, majf=0, minf=4097 00:25:56.235 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:56.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.235 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:56.235 issued rwts: total=5525,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.235 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:56.235 00:25:56.235 Run status group 0 (all jobs): 00:25:56.235 READ: bw=1448MiB/s (1519MB/s), 103MiB/s-202MiB/s (108MB/s-212MB/s), io=14.4GiB (15.5GB), run=10135-10194msec 00:25:56.235 00:25:56.235 Disk stats (read/write): 00:25:56.235 nvme0n1: ios=11590/0, merge=0/0, ticks=1238222/0, in_queue=1238222, util=96.95% 00:25:56.235 nvme10n1: ios=16248/0, merge=0/0, ticks=1227062/0, in_queue=1227062, util=97.18% 00:25:56.235 nvme1n1: ios=10852/0, merge=0/0, ticks=1264238/0, in_queue=1264238, util=97.54% 00:25:56.235 nvme2n1: ios=8972/0, merge=0/0, ticks=1215889/0, in_queue=1215889, util=97.64% 00:25:56.235 nvme3n1: ios=8186/0, merge=0/0, ticks=1230645/0, in_queue=1230645, util=97.72% 00:25:56.235 nvme4n1: ios=8721/0, merge=0/0, ticks=1230246/0, in_queue=1230246, util=98.09% 00:25:56.235 nvme5n1: ios=8692/0, merge=0/0, ticks=1226129/0, in_queue=1226129, util=98.27% 00:25:56.235 nvme6n1: ios=9021/0, merge=0/0, ticks=1225984/0, in_queue=1225984, util=98.40% 00:25:56.235 nvme7n1: ios=9339/0, merge=0/0, ticks=1230975/0, in_queue=1230975, util=98.86% 00:25:56.235 nvme8n1: ios=13646/0, merge=0/0, ticks=1231069/0, in_queue=1231069, util=99.08% 00:25:56.235 nvme9n1: ios=10808/0, merge=0/0, ticks=1236572/0, in_queue=1236572, util=99.21% 00:25:56.235 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:56.235 [global] 00:25:56.235 thread=1 00:25:56.235 invalidate=1 00:25:56.235 rw=randwrite 00:25:56.235 time_based=1 00:25:56.235 runtime=10 00:25:56.235 ioengine=libaio 00:25:56.235 direct=1 00:25:56.235 bs=262144 00:25:56.235 iodepth=64 00:25:56.235 norandommap=1 00:25:56.235 numjobs=1 00:25:56.235 00:25:56.235 [job0] 00:25:56.235 filename=/dev/nvme0n1 00:25:56.235 [job1] 00:25:56.235 filename=/dev/nvme10n1 00:25:56.235 [job2] 
00:25:56.235 filename=/dev/nvme1n1 00:25:56.236 [job3] 00:25:56.236 filename=/dev/nvme2n1 00:25:56.236 [job4] 00:25:56.236 filename=/dev/nvme3n1 00:25:56.236 [job5] 00:25:56.236 filename=/dev/nvme4n1 00:25:56.236 [job6] 00:25:56.236 filename=/dev/nvme5n1 00:25:56.236 [job7] 00:25:56.236 filename=/dev/nvme6n1 00:25:56.236 [job8] 00:25:56.236 filename=/dev/nvme7n1 00:25:56.236 [job9] 00:25:56.236 filename=/dev/nvme8n1 00:25:56.236 [job10] 00:25:56.236 filename=/dev/nvme9n1 00:25:56.236 Could not set queue depth (nvme0n1) 00:25:56.236 Could not set queue depth (nvme10n1) 00:25:56.236 Could not set queue depth (nvme1n1) 00:25:56.236 Could not set queue depth (nvme2n1) 00:25:56.236 Could not set queue depth (nvme3n1) 00:25:56.236 Could not set queue depth (nvme4n1) 00:25:56.236 Could not set queue depth (nvme5n1) 00:25:56.236 Could not set queue depth (nvme6n1) 00:25:56.236 Could not set queue depth (nvme7n1) 00:25:56.236 Could not set queue depth (nvme8n1) 00:25:56.236 Could not set queue depth (nvme9n1) 00:25:56.236 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:56.236 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:56.236 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:56.236 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:56.236 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:56.236 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:56.236 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:56.236 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:56.236 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:56.236 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:56.236 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:56.236 fio-3.35 00:25:56.236 Starting 11 threads 00:26:06.209 00:26:06.209 job0: (groupid=0, jobs=1): err= 0: pid=717115: Fri Jul 26 16:31:25 2024 00:26:06.209 write: IOPS=258, BW=64.5MiB/s (67.7MB/s)(656MiB/10161msec); 0 zone resets 00:26:06.209 slat (usec): min=25, max=99512, avg=3119.02, stdev=7480.65 00:26:06.209 clat (msec): min=5, max=617, avg=244.59, stdev=107.47 00:26:06.209 lat (msec): min=7, max=617, avg=247.71, stdev=109.20 00:26:06.209 clat percentiles (msec): 00:26:06.209 | 1.00th=[ 27], 5.00th=[ 68], 10.00th=[ 104], 20.00th=[ 176], 00:26:06.209 | 30.00th=[ 201], 40.00th=[ 218], 50.00th=[ 241], 60.00th=[ 255], 00:26:06.209 | 70.00th=[ 275], 80.00th=[ 313], 90.00th=[ 376], 95.00th=[ 443], 00:26:06.209 | 99.00th=[ 584], 99.50th=[ 600], 99.90th=[ 617], 99.95th=[ 617], 00:26:06.209 | 99.99th=[ 617] 00:26:06.209 bw ( KiB/s): min=26624, max=100864, per=6.05%, avg=65521.60, stdev=19903.34, samples=20 00:26:06.209 iops : min= 104, max= 394, avg=255.90, stdev=77.74, samples=20 00:26:06.209 lat (msec) : 10=0.11%, 20=0.50%, 50=2.74%, 100=6.06%, 250=47.62% 00:26:06.209 lat (msec) : 500=39.57%, 750=3.39% 
00:26:06.209 cpu : usr=0.76%, sys=1.00%, ctx=1265, majf=0, minf=1 00:26:06.209 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:26:06.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:06.209 issued rwts: total=0,2623,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.209 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:06.209 job1: (groupid=0, jobs=1): err= 0: pid=717127: Fri Jul 26 16:31:25 2024 00:26:06.209 write: IOPS=497, BW=124MiB/s (130MB/s)(1265MiB/10166msec); 0 zone resets 00:26:06.209 slat (usec): min=19, max=81981, avg=1388.12, stdev=3923.58 00:26:06.210 clat (usec): min=1575, max=455382, avg=127151.65, stdev=86266.48 00:26:06.210 lat (usec): min=1610, max=463477, avg=128539.77, stdev=87181.50 00:26:06.210 clat percentiles (msec): 00:26:06.210 | 1.00th=[ 5], 5.00th=[ 22], 10.00th=[ 37], 20.00th=[ 53], 00:26:06.210 | 30.00th=[ 62], 40.00th=[ 88], 50.00th=[ 103], 60.00th=[ 134], 00:26:06.210 | 70.00th=[ 176], 80.00th=[ 194], 90.00th=[ 249], 95.00th=[ 292], 00:26:06.210 | 99.00th=[ 388], 99.50th=[ 426], 99.90th=[ 447], 99.95th=[ 451], 00:26:06.210 | 99.99th=[ 456] 00:26:06.210 bw ( KiB/s): min=63361, max=293888, per=11.82%, avg=127882.80, stdev=55910.60, samples=20 00:26:06.210 iops : min= 247, max= 1148, avg=499.50, stdev=218.45, samples=20 00:26:06.210 lat (msec) : 2=0.12%, 4=0.43%, 10=1.70%, 20=2.23%, 50=10.44% 00:26:06.210 lat (msec) : 100=31.15%, 250=44.30%, 500=9.63% 00:26:06.210 cpu : usr=1.47%, sys=1.64%, ctx=2610, majf=0, minf=1 00:26:06.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:06.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:06.210 issued rwts: total=0,5059,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.210 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:06.210 job2: (groupid=0, jobs=1): err= 0: pid=717128: Fri Jul 26 16:31:25 2024 00:26:06.210 write: IOPS=314, BW=78.5MiB/s (82.4MB/s)(792MiB/10086msec); 0 zone resets 00:26:06.210 slat (usec): min=16, max=162707, avg=2594.43, stdev=7079.62 00:26:06.210 clat (msec): min=2, max=481, avg=201.04, stdev=119.13 00:26:06.210 lat (msec): min=3, max=481, avg=203.63, stdev=120.84 00:26:06.210 clat percentiles (msec): 00:26:06.210 | 1.00th=[ 10], 5.00th=[ 26], 10.00th=[ 45], 20.00th=[ 87], 00:26:06.210 | 30.00th=[ 102], 40.00th=[ 153], 50.00th=[ 203], 60.00th=[ 245], 00:26:06.210 | 70.00th=[ 292], 80.00th=[ 326], 90.00th=[ 355], 95.00th=[ 376], 00:26:06.210 | 99.00th=[ 443], 99.50th=[ 464], 99.90th=[ 472], 99.95th=[ 472], 00:26:06.210 | 99.99th=[ 481] 00:26:06.210 bw ( KiB/s): min=38912, max=161980, per=7.34%, avg=79492.90, stdev=38722.13, samples=20 00:26:06.210 iops : min= 152, max= 632, avg=310.45, stdev=151.21, samples=20 00:26:06.210 lat (msec) : 4=0.13%, 10=1.29%, 20=2.71%, 50=7.32%, 100=16.50% 00:26:06.210 lat (msec) : 250=33.26%, 500=38.78% 00:26:06.210 cpu : usr=0.95%, sys=0.74%, ctx=1650, majf=0, minf=1 00:26:06.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:06.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:06.210 issued rwts: total=0,3169,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.210 latency : target=0, window=0, percentile=100.00%, depth=64 
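The per-job blocks above and below are ordinary fio group reports. This randwrite pass was launched through scripts/fio-wrapper with "-p nvmf -i 262144 -d 64 -t randwrite -r 10", which expands to the job file echoed just before "Starting 11 threads". A rough standalone equivalent, reconstructed only from that echoed job file (the wrapper's internals are not visible in this log, and only the first three of the eleven namespaces are spelled out), would be:

    # Standalone fio invocation rebuilt from the [global]/[jobN] sections echoed
    # above; options before the first --name act as globals, each --name/--filename
    # pair adds one job per connected namespace. The read phase earlier in the log
    # is identical except for rw=read. Device paths are the ones this host
    # enumerated and may differ elsewhere.
    fio --thread=1 --invalidate=1 --rw=randwrite --time_based=1 --runtime=10 \
        --ioengine=libaio --direct=1 --bs=262144 --iodepth=64 \
        --norandommap=1 --numjobs=1 \
        --name=job0 --filename=/dev/nvme0n1 \
        --name=job1 --filename=/dev/nvme10n1 \
        --name=job2 --filename=/dev/nvme1n1
        # ...and so on through /dev/nvme9n1 for the remaining subsystems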
00:26:06.210 job3: (groupid=0, jobs=1): err= 0: pid=717129: Fri Jul 26 16:31:25 2024 00:26:06.210 write: IOPS=460, BW=115MiB/s (121MB/s)(1164MiB/10114msec); 0 zone resets 00:26:06.210 slat (usec): min=19, max=79039, avg=1752.75, stdev=4223.80 00:26:06.210 clat (usec): min=1356, max=363544, avg=137225.89, stdev=69608.77 00:26:06.210 lat (usec): min=1403, max=366688, avg=138978.64, stdev=70396.02 00:26:06.210 clat percentiles (msec): 00:26:06.210 | 1.00th=[ 11], 5.00th=[ 28], 10.00th=[ 55], 20.00th=[ 93], 00:26:06.210 | 30.00th=[ 100], 40.00th=[ 118], 50.00th=[ 130], 60.00th=[ 140], 00:26:06.210 | 70.00th=[ 153], 80.00th=[ 184], 90.00th=[ 243], 95.00th=[ 292], 00:26:06.210 | 99.00th=[ 330], 99.50th=[ 342], 99.90th=[ 359], 99.95th=[ 363], 00:26:06.210 | 99.99th=[ 363] 00:26:06.210 bw ( KiB/s): min=60928, max=176640, per=10.86%, avg=117524.10, stdev=33185.70, samples=20 00:26:06.210 iops : min= 238, max= 690, avg=459.05, stdev=129.67, samples=20 00:26:06.210 lat (msec) : 2=0.04%, 4=0.02%, 10=0.73%, 20=2.62%, 50=5.65% 00:26:06.210 lat (msec) : 100=21.94%, 250=59.86%, 500=9.13% 00:26:06.210 cpu : usr=1.15%, sys=1.39%, ctx=2014, majf=0, minf=1 00:26:06.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:26:06.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:06.210 issued rwts: total=0,4654,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.210 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:06.210 job4: (groupid=0, jobs=1): err= 0: pid=717130: Fri Jul 26 16:31:25 2024 00:26:06.210 write: IOPS=352, BW=88.2MiB/s (92.5MB/s)(897MiB/10167msec); 0 zone resets 00:26:06.210 slat (usec): min=16, max=133110, avg=2009.35, stdev=6588.91 00:26:06.210 clat (msec): min=2, max=498, avg=179.31, stdev=110.88 00:26:06.210 lat (msec): min=2, max=522, avg=181.32, stdev=112.25 00:26:06.210 clat percentiles (msec): 00:26:06.210 | 1.00th=[ 11], 5.00th=[ 20], 10.00th=[ 34], 20.00th=[ 61], 00:26:06.210 | 30.00th=[ 100], 40.00th=[ 132], 50.00th=[ 188], 60.00th=[ 207], 00:26:06.210 | 70.00th=[ 247], 80.00th=[ 284], 90.00th=[ 330], 95.00th=[ 355], 00:26:06.210 | 99.00th=[ 443], 99.50th=[ 456], 99.90th=[ 498], 99.95th=[ 498], 00:26:06.210 | 99.99th=[ 498] 00:26:06.210 bw ( KiB/s): min=46592, max=161469, per=8.33%, avg=90189.80, stdev=30676.55, samples=20 00:26:06.210 iops : min= 182, max= 630, avg=352.25, stdev=119.74, samples=20 00:26:06.210 lat (msec) : 4=0.17%, 10=0.72%, 20=4.82%, 50=10.37%, 100=14.19% 00:26:06.210 lat (msec) : 250=40.42%, 500=29.30% 00:26:06.210 cpu : usr=1.09%, sys=1.09%, ctx=2114, majf=0, minf=1 00:26:06.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:26:06.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:06.210 issued rwts: total=0,3587,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.210 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:06.210 job5: (groupid=0, jobs=1): err= 0: pid=717131: Fri Jul 26 16:31:25 2024 00:26:06.210 write: IOPS=376, BW=94.0MiB/s (98.6MB/s)(949MiB/10087msec); 0 zone resets 00:26:06.210 slat (usec): min=19, max=146944, avg=2111.82, stdev=5915.27 00:26:06.210 clat (msec): min=3, max=456, avg=167.54, stdev=104.40 00:26:06.210 lat (msec): min=3, max=456, avg=169.65, stdev=105.88 00:26:06.210 clat percentiles (msec): 00:26:06.210 | 1.00th=[ 13], 5.00th=[ 39], 
10.00th=[ 54], 20.00th=[ 70], 00:26:06.210 | 30.00th=[ 92], 40.00th=[ 117], 50.00th=[ 142], 60.00th=[ 176], 00:26:06.210 | 70.00th=[ 215], 80.00th=[ 275], 90.00th=[ 334], 95.00th=[ 359], 00:26:06.210 | 99.00th=[ 401], 99.50th=[ 430], 99.90th=[ 451], 99.95th=[ 456], 00:26:06.210 | 99.99th=[ 456] 00:26:06.210 bw ( KiB/s): min=43008, max=226816, per=8.82%, avg=95489.00, stdev=51026.16, samples=20 00:26:06.210 iops : min= 168, max= 886, avg=372.95, stdev=199.31, samples=20 00:26:06.210 lat (msec) : 4=0.05%, 10=0.84%, 20=1.03%, 50=6.51%, 100=25.22% 00:26:06.210 lat (msec) : 250=42.12%, 500=24.22% 00:26:06.210 cpu : usr=1.25%, sys=1.28%, ctx=1884, majf=0, minf=1 00:26:06.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:26:06.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:06.210 issued rwts: total=0,3794,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.210 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:06.210 job6: (groupid=0, jobs=1): err= 0: pid=717132: Fri Jul 26 16:31:25 2024 00:26:06.210 write: IOPS=344, BW=86.2MiB/s (90.4MB/s)(877MiB/10167msec); 0 zone resets 00:26:06.210 slat (usec): min=19, max=251584, avg=1961.29, stdev=6808.88 00:26:06.210 clat (msec): min=5, max=721, avg=182.98, stdev=102.20 00:26:06.210 lat (msec): min=5, max=721, avg=184.94, stdev=103.32 00:26:06.210 clat percentiles (msec): 00:26:06.210 | 1.00th=[ 16], 5.00th=[ 33], 10.00th=[ 48], 20.00th=[ 95], 00:26:06.210 | 30.00th=[ 142], 40.00th=[ 161], 50.00th=[ 182], 60.00th=[ 194], 00:26:06.210 | 70.00th=[ 211], 80.00th=[ 249], 90.00th=[ 300], 95.00th=[ 372], 00:26:06.210 | 99.00th=[ 523], 99.50th=[ 609], 99.90th=[ 684], 99.95th=[ 709], 00:26:06.210 | 99.99th=[ 726] 00:26:06.210 bw ( KiB/s): min=41472, max=165888, per=8.14%, avg=88125.00, stdev=32794.97, samples=20 00:26:06.210 iops : min= 162, max= 648, avg=344.20, stdev=128.12, samples=20 00:26:06.210 lat (msec) : 10=0.43%, 20=1.25%, 50=8.59%, 100=12.15%, 250=57.93% 00:26:06.210 lat (msec) : 500=18.48%, 750=1.17% 00:26:06.210 cpu : usr=0.95%, sys=1.31%, ctx=2025, majf=0, minf=1 00:26:06.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:26:06.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:06.210 issued rwts: total=0,3506,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.210 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:06.210 job7: (groupid=0, jobs=1): err= 0: pid=717133: Fri Jul 26 16:31:25 2024 00:26:06.210 write: IOPS=306, BW=76.5MiB/s (80.3MB/s)(778MiB/10160msec); 0 zone resets 00:26:06.210 slat (usec): min=23, max=114783, avg=2309.63, stdev=6530.42 00:26:06.210 clat (msec): min=3, max=610, avg=206.55, stdev=118.50 00:26:06.210 lat (msec): min=3, max=619, avg=208.86, stdev=120.13 00:26:06.210 clat percentiles (msec): 00:26:06.210 | 1.00th=[ 12], 5.00th=[ 39], 10.00th=[ 53], 20.00th=[ 102], 00:26:06.210 | 30.00th=[ 134], 40.00th=[ 155], 50.00th=[ 205], 60.00th=[ 234], 00:26:06.210 | 70.00th=[ 271], 80.00th=[ 313], 90.00th=[ 359], 95.00th=[ 380], 00:26:06.210 | 99.00th=[ 567], 99.50th=[ 584], 99.90th=[ 600], 99.95th=[ 600], 00:26:06.210 | 99.99th=[ 609] 00:26:06.210 bw ( KiB/s): min=42922, max=133120, per=7.21%, avg=78022.00, stdev=29655.37, samples=20 00:26:06.210 iops : min= 167, max= 520, avg=304.70, stdev=115.91, samples=20 00:26:06.210 lat 
(msec) : 4=0.06%, 10=0.64%, 20=1.58%, 50=6.72%, 100=10.74% 00:26:06.210 lat (msec) : 250=45.71%, 500=32.43%, 750=2.12% 00:26:06.210 cpu : usr=0.92%, sys=1.12%, ctx=1785, majf=0, minf=1 00:26:06.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:06.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:06.210 issued rwts: total=0,3111,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.210 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:06.210 job8: (groupid=0, jobs=1): err= 0: pid=717134: Fri Jul 26 16:31:25 2024 00:26:06.210 write: IOPS=468, BW=117MiB/s (123MB/s)(1185MiB/10111msec); 0 zone resets 00:26:06.210 slat (usec): min=19, max=204712, avg=1366.68, stdev=5922.50 00:26:06.211 clat (usec): min=1350, max=612730, avg=135107.29, stdev=106962.20 00:26:06.211 lat (usec): min=1387, max=612805, avg=136473.96, stdev=108451.97 00:26:06.211 clat percentiles (msec): 00:26:06.211 | 1.00th=[ 5], 5.00th=[ 13], 10.00th=[ 23], 20.00th=[ 41], 00:26:06.211 | 30.00th=[ 67], 40.00th=[ 95], 50.00th=[ 108], 60.00th=[ 130], 00:26:06.211 | 70.00th=[ 184], 80.00th=[ 222], 90.00th=[ 259], 95.00th=[ 296], 00:26:06.211 | 99.00th=[ 558], 99.50th=[ 567], 99.90th=[ 600], 99.95th=[ 617], 00:26:06.211 | 99.99th=[ 617] 00:26:06.211 bw ( KiB/s): min=23086, max=217600, per=11.06%, avg=119681.55, stdev=56216.25, samples=20 00:26:06.211 iops : min= 90, max= 850, avg=467.45, stdev=219.60, samples=20 00:26:06.211 lat (msec) : 2=0.11%, 4=0.46%, 10=3.29%, 20=5.02%, 50=14.81% 00:26:06.211 lat (msec) : 100=20.13%, 250=43.93%, 500=10.64%, 750=1.60% 00:26:06.211 cpu : usr=1.39%, sys=1.53%, ctx=3198, majf=0, minf=1 00:26:06.211 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:26:06.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:06.211 issued rwts: total=0,4739,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.211 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:06.211 job9: (groupid=0, jobs=1): err= 0: pid=717135: Fri Jul 26 16:31:25 2024 00:26:06.211 write: IOPS=325, BW=81.4MiB/s (85.4MB/s)(828MiB/10163msec); 0 zone resets 00:26:06.211 slat (usec): min=18, max=122546, avg=2444.03, stdev=6381.05 00:26:06.211 clat (msec): min=9, max=516, avg=193.96, stdev=115.76 00:26:06.211 lat (msec): min=10, max=516, avg=196.40, stdev=117.17 00:26:06.211 clat percentiles (msec): 00:26:06.211 | 1.00th=[ 27], 5.00th=[ 49], 10.00th=[ 63], 20.00th=[ 80], 00:26:06.211 | 30.00th=[ 106], 40.00th=[ 120], 50.00th=[ 180], 60.00th=[ 197], 00:26:06.211 | 70.00th=[ 279], 80.00th=[ 321], 90.00th=[ 355], 95.00th=[ 380], 00:26:06.211 | 99.00th=[ 472], 99.50th=[ 498], 99.90th=[ 514], 99.95th=[ 518], 00:26:06.211 | 99.99th=[ 518] 00:26:06.211 bw ( KiB/s): min=38912, max=219136, per=7.68%, avg=83127.35, stdev=48377.73, samples=20 00:26:06.211 iops : min= 152, max= 856, avg=324.65, stdev=189.00, samples=20 00:26:06.211 lat (msec) : 10=0.03%, 20=0.30%, 50=5.08%, 100=20.66%, 250=38.67% 00:26:06.211 lat (msec) : 500=34.80%, 750=0.45% 00:26:06.211 cpu : usr=0.93%, sys=1.12%, ctx=1473, majf=0, minf=1 00:26:06.211 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:26:06.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 
00:26:06.211 issued rwts: total=0,3310,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.211 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:06.211 job10: (groupid=0, jobs=1): err= 0: pid=717136: Fri Jul 26 16:31:25 2024 00:26:06.211 write: IOPS=534, BW=134MiB/s (140MB/s)(1359MiB/10158msec); 0 zone resets 00:26:06.211 slat (usec): min=22, max=69505, avg=1305.59, stdev=3530.98 00:26:06.211 clat (msec): min=2, max=427, avg=118.27, stdev=73.10 00:26:06.211 lat (msec): min=2, max=431, avg=119.58, stdev=73.80 00:26:06.211 clat percentiles (msec): 00:26:06.211 | 1.00th=[ 9], 5.00th=[ 27], 10.00th=[ 36], 20.00th=[ 55], 00:26:06.211 | 30.00th=[ 72], 40.00th=[ 97], 50.00th=[ 110], 60.00th=[ 128], 00:26:06.211 | 70.00th=[ 140], 80.00th=[ 157], 90.00th=[ 230], 95.00th=[ 259], 00:26:06.211 | 99.00th=[ 363], 99.50th=[ 380], 99.90th=[ 414], 99.95th=[ 422], 00:26:06.211 | 99.99th=[ 430] 00:26:06.211 bw ( KiB/s): min=63488, max=225341, per=12.70%, avg=137459.05, stdev=45906.80, samples=20 00:26:06.211 iops : min= 248, max= 880, avg=536.90, stdev=179.33, samples=20 00:26:06.211 lat (msec) : 4=0.17%, 10=1.05%, 20=2.02%, 50=13.49%, 100=29.19% 00:26:06.211 lat (msec) : 250=47.35%, 500=6.74% 00:26:06.211 cpu : usr=1.27%, sys=1.85%, ctx=3020, majf=0, minf=1 00:26:06.211 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:06.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:06.211 issued rwts: total=0,5434,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.211 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:06.211 00:26:06.211 Run status group 0 (all jobs): 00:26:06.211 WRITE: bw=1057MiB/s (1108MB/s), 64.5MiB/s-134MiB/s (67.7MB/s-140MB/s), io=10.5GiB (11.3GB), run=10086-10167msec 00:26:06.211 00:26:06.211 Disk stats (read/write): 00:26:06.211 nvme0n1: ios=51/5074, merge=0/0, ticks=2049/1207297, in_queue=1209346, util=99.92% 00:26:06.211 nvme10n1: ios=46/9941, merge=0/0, ticks=261/1216239, in_queue=1216500, util=98.00% 00:26:06.211 nvme1n1: ios=0/6122, merge=0/0, ticks=0/1218165, in_queue=1218165, util=97.61% 00:26:06.211 nvme2n1: ios=44/9112, merge=0/0, ticks=971/1214440, in_queue=1215411, util=100.00% 00:26:06.211 nvme3n1: ios=42/7003, merge=0/0, ticks=1459/1211729, in_queue=1213188, util=100.00% 00:26:06.211 nvme4n1: ios=46/7368, merge=0/0, ticks=1371/1215377, in_queue=1216748, util=100.00% 00:26:06.211 nvme5n1: ios=43/6841, merge=0/0, ticks=1018/1213795, in_queue=1214813, util=100.00% 00:26:06.211 nvme6n1: ios=42/6048, merge=0/0, ticks=1359/1211659, in_queue=1213018, util=100.00% 00:26:06.211 nvme7n1: ios=40/9278, merge=0/0, ticks=3646/1218926, in_queue=1222572, util=100.00% 00:26:06.211 nvme8n1: ios=39/6445, merge=0/0, ticks=2532/1211600, in_queue=1214132, util=100.00% 00:26:06.211 nvme9n1: ios=0/10688, merge=0/0, ticks=0/1212036, in_queue=1212036, util=99.05% 00:26:06.211 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:06.211 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:06.211 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.211 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:06.211 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
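From here to the end of the section the script tears the fabric back down: for each of the 11 subsystems it disconnects the host-side controller with nvme disconnect, waits for the corresponding SPDKn serial to disappear from lsblk, and then deletes the subsystem on the target over the SPDK RPC socket. Reassembled from the traced target/multiconnection.sh@37-@40 lines, the loop is roughly the sketch below; NVMF_SUBSYS (11 in this run) and the rpc_cmd wrapper are defined elsewhere in the SPDK test harness and are assumed to be in scope.

    # Teardown loop reconstructed from the traced multiconnection.sh@37-@40 lines.
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"            # drop the host-side controller
        waitforserial_disconnect "SPDK${i}"                           # block until the namespace is gone
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}" # remove the subsystem on the target
    done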
00:26:06.211 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:06.211 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:06.211 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:06.211 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:26:06.211 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:06.211 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:26:06.211 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:06.211 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:06.211 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.211 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.211 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.211 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.211 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:06.211 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:06.211 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:06.211 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:06.211 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:06.211 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:26:06.211 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:06.211 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:26:06.211 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:06.211 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:06.211 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.211 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.211 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.211 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.211 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:06.777 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 
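Both wait helpers traced throughout this run are small polling loops in common/autotest_common.sh: waitforserial (used after every nvme connect earlier in the section) re-runs lsblk until the expected number of namespaces carrying the given serial shows up, and waitforserial_disconnect (used here) polls until the serial is gone. A condensed sketch of the connect-side helper, pieced together from the traced autotest_common.sh@1198-@1208 lines; the per-retry pacing inside the loop is not fully visible in this log and is an assumption:

    # Condensed sketch of waitforserial as suggested by the traced
    # autotest_common.sh@1198-@1208 lines: poll lsblk until the expected
    # number of namespaces with the given serial appears.
    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=1 nvme_devices=0
        [[ -n ${2:-} ]] && nvme_device_counter=$2
        sleep 2
        while ((i++ <= 15)); do
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial" || true)
            ((nvme_devices == nvme_device_counter)) && return 0
            sleep 2   # assumed retry interval; not visible in this trace
        done
        return 1
    }

waitforserial_disconnect is the mirror image: it keeps polling lsblk with grep -q -w until the serial no longer appears and then returns 0, which is why each nvme disconnect below is followed by another short burst of lsblk/grep trace lines before the subsystem is deleted.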
00:26:06.777 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:06.777 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:06.777 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:26:06.777 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:06.777 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:06.777 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:26:06.777 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:06.777 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:06.777 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.777 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.777 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.777 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.777 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:07.036 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:07.036 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:07.036 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:07.036 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:07.036 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:26:07.036 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:07.036 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:26:07.037 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:07.037 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:07.037 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.037 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:07.037 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.037 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.037 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:07.297 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 
00:26:07.297 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:07.297 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:07.297 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:07.297 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:26:07.297 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:07.297 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:26:07.297 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:07.297 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:07.297 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.297 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:07.297 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.297 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.297 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:07.557 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:07.557 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:07.558 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:07.558 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:07.558 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:26:07.558 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:07.558 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:26:07.558 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:07.558 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:07.558 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.558 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:07.558 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.558 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.558 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:07.818 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 
00:26:07.818 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:07.818 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:07.818 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:07.818 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:26:07.818 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:07.818 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:26:07.818 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:07.818 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:07.818 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.818 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:07.818 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.818 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.818 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:08.077 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:08.077 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:08.077 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:08.077 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:08.077 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:26:08.077 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:08.077 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:26:08.077 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:08.077 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:08.077 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.077 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:08.077 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.077 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:08.077 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:08.335 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 
00:26:08.335 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:08.335 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:08.335 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:08.335 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:26:08.335 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:08.335 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:26:08.335 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:08.335 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:08.335 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.335 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:08.335 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.335 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:08.335 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:08.593 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:08.593 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:08.593 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:08.593 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:08.593 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:26:08.593 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:08.593 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:26:08.593 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:08.593 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:08.593 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.593 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:08.593 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.593 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:08.593 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:08.851 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 
controller(s) 00:26:08.851 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:08.851 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:08.851 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:08.851 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:26:08.851 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:08.851 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:26:08.851 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:08.851 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:08.851 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.851 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:08.851 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.851 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:08.851 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:08.851 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:08.851 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:08.851 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:26:08.851 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:08.851 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:26:08.851 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:08.851 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:08.851 rmmod nvme_tcp 00:26:08.851 rmmod nvme_fabrics 00:26:08.851 rmmod nvme_keyring 00:26:08.851 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:08.851 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:26:08.851 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:26:08.851 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 711581 ']' 00:26:08.851 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 711581 00:26:08.851 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 711581 ']' 00:26:08.851 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 711581 00:26:08.851 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:26:08.851 16:31:28 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:08.851 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 711581 00:26:08.852 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:08.852 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:08.852 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 711581' 00:26:08.852 killing process with pid 711581 00:26:08.852 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 711581 00:26:08.852 16:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 711581 00:26:12.142 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:12.142 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:12.142 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:12.142 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:12.142 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:12.142 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:12.142 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:12.142 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:14.087 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:14.087 00:26:14.087 real 1m5.267s 00:26:14.087 user 3m36.893s 00:26:14.087 sys 0m22.959s 00:26:14.087 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:14.087 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:14.087 ************************************ 00:26:14.087 END TEST nvmf_multiconnection 00:26:14.087 ************************************ 00:26:14.087 16:31:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:14.087 16:31:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:14.087 16:31:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:14.087 16:31:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:14.087 ************************************ 00:26:14.087 START TEST nvmf_initiator_timeout 00:26:14.087 ************************************ 00:26:14.087 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:14.087 * Looking for test storage... 
00:26:14.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:14.088 16:31:33 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:26:14.088 16:31:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:26:15.993 16:31:35 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:15.993 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:15.993 16:31:35 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:15.993 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:15.993 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:15.993 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:15.993 16:31:35 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:15.993 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:15.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:15.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:26:15.994 00:26:15.994 --- 10.0.0.2 ping statistics --- 00:26:15.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:15.994 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:15.994 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:15.994 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:26:15.994 00:26:15.994 --- 10.0.0.1 ping statistics --- 00:26:15.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:15.994 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=720728 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 720728 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 720728 ']' 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:15.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:15.994 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:16.252 [2024-07-26 16:31:35.797991] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:26:16.252 [2024-07-26 16:31:35.798162] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:16.252 EAL: No free 2048 kB hugepages reported on node 1 00:26:16.253 [2024-07-26 16:31:35.961902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:16.511 [2024-07-26 16:31:36.206756] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:16.511 [2024-07-26 16:31:36.206820] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:16.511 [2024-07-26 16:31:36.206850] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:16.511 [2024-07-26 16:31:36.206866] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:16.511 [2024-07-26 16:31:36.206882] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:16.511 [2024-07-26 16:31:36.207009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:16.511 [2024-07-26 16:31:36.207083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:16.511 [2024-07-26 16:31:36.207145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:16.511 [2024-07-26 16:31:36.207153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:17.080 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:17.080 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:26:17.080 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:17.080 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:17.080 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.341 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:17.341 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:17.341 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:17.341 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.341 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.341 Malloc0 00:26:17.341 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
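Editor's note: from here the trace shows initiator_timeout.sh assembling its target over the in-namespace RPC socket: a 64 MiB / 512 B Malloc bdev, a Delay bdev layered on top of it, a TCP transport, and subsystem cnode1 with the Delay0 namespace and a listener on 10.0.0.2:4420. Assuming rpc_cmd is the usual thin wrapper around scripts/rpc.py, the same sequence issued by hand would look roughly like this (arguments copied from the traced rpc_cmd calls):

    # Equivalent manual setup; assumes rpc_cmd forwards to scripts/rpc.py on the target's RPC socket.
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30   # delay parameters as in the trace
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Initiator side, from the default network namespace
    # (the traced connect also passes --hostnqn/--hostid for this host):
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420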
00:26:17.341 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:17.341 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.341 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.341 Delay0 00:26:17.341 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.341 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:17.341 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.341 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.341 [2024-07-26 16:31:36.951516] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:17.341 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.341 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:17.341 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.341 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.341 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.341 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:17.341 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.341 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.341 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.341 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:17.341 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.341 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.341 [2024-07-26 16:31:36.980845] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:17.341 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.341 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:17.908 16:31:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:17.908 16:31:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 
-- # local i=0 00:26:17.908 16:31:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:17.908 16:31:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:17.908 16:31:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:26:20.449 16:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:20.449 16:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:20.449 16:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:26:20.449 16:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:20.449 16:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:20.449 16:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:26:20.449 16:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=721287 00:26:20.449 16:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:26:20.449 16:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:20.449 [global] 00:26:20.449 thread=1 00:26:20.449 invalidate=1 00:26:20.449 rw=write 00:26:20.449 time_based=1 00:26:20.449 runtime=60 00:26:20.449 ioengine=libaio 00:26:20.449 direct=1 00:26:20.449 bs=4096 00:26:20.449 iodepth=1 00:26:20.449 norandommap=0 00:26:20.449 numjobs=1 00:26:20.449 00:26:20.449 verify_dump=1 00:26:20.449 verify_backlog=512 00:26:20.449 verify_state_save=0 00:26:20.449 do_verify=1 00:26:20.449 verify=crc32c-intel 00:26:20.449 [job0] 00:26:20.449 filename=/dev/nvme0n1 00:26:20.449 Could not set queue depth (nvme0n1) 00:26:20.449 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:20.449 fio-3.35 00:26:20.449 Starting 1 thread 00:26:22.982 16:31:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:22.982 16:31:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.982 16:31:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:22.982 true 00:26:22.982 16:31:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.982 16:31:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:22.982 16:31:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.982 16:31:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:22.982 true 00:26:22.982 16:31:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.982 16:31:42 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:22.982 16:31:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.982 16:31:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:22.982 true 00:26:22.982 16:31:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.982 16:31:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:22.982 16:31:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.982 16:31:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:22.982 true 00:26:22.982 16:31:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.982 16:31:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:26.265 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:26.265 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.265 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:26.265 true 00:26:26.266 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.266 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:26.266 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.266 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:26.266 true 00:26:26.266 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.266 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:26.266 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.266 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:26.266 true 00:26:26.266 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.266 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:26.266 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.266 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:26.266 true 00:26:26.266 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.266 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:26.266 16:31:45 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 721287 00:27:22.517 00:27:22.517 job0: (groupid=0, jobs=1): err= 0: pid=721356: Fri Jul 26 16:32:39 2024 00:27:22.517 read: IOPS=56, BW=226KiB/s (231kB/s)(13.2MiB/60016msec) 00:27:22.517 slat (nsec): min=5897, max=63386, avg=20861.25, stdev=9412.29 00:27:22.517 clat (usec): min=379, max=41153k, avg=17317.01, stdev=707468.21 00:27:22.517 lat (usec): min=390, max=41153k, avg=17337.87, stdev=707468.20 00:27:22.517 clat percentiles (usec): 00:27:22.517 | 1.00th=[ 412], 5.00th=[ 441], 10.00th=[ 457], 00:27:22.517 | 20.00th=[ 474], 30.00th=[ 486], 40.00th=[ 494], 00:27:22.517 | 50.00th=[ 502], 60.00th=[ 515], 70.00th=[ 529], 00:27:22.517 | 80.00th=[ 545], 90.00th=[ 41157], 95.00th=[ 41157], 00:27:22.517 | 99.00th=[ 41157], 99.50th=[ 41157], 99.90th=[ 43779], 00:27:22.517 | 99.95th=[ 44827], 99.99th=[17112761] 00:27:22.517 write: IOPS=59, BW=239KiB/s (245kB/s)(14.0MiB/60016msec); 0 zone resets 00:27:22.517 slat (nsec): min=6196, max=71890, avg=19375.30, stdev=11025.56 00:27:22.517 clat (usec): min=249, max=1935, avg=344.67, stdev=73.49 00:27:22.517 lat (usec): min=257, max=1951, avg=364.05, stdev=78.66 00:27:22.517 clat percentiles (usec): 00:27:22.517 | 1.00th=[ 260], 5.00th=[ 269], 10.00th=[ 277], 20.00th=[ 289], 00:27:22.517 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 322], 60.00th=[ 338], 00:27:22.517 | 70.00th=[ 371], 80.00th=[ 392], 90.00th=[ 457], 95.00th=[ 498], 00:27:22.517 | 99.00th=[ 537], 99.50th=[ 545], 99.90th=[ 562], 99.95th=[ 627], 00:27:22.517 | 99.99th=[ 1942] 00:27:22.517 bw ( KiB/s): min= 4096, max= 5376, per=100.00%, avg=4778.67, stdev=556.81, samples=6 00:27:22.517 iops : min= 1024, max= 1344, avg=1194.67, stdev=139.20, samples=6 00:27:22.517 lat (usec) : 250=0.01%, 500=71.33%, 750=23.05% 00:27:22.517 lat (msec) : 2=0.01%, 50=5.58%, >=2000=0.01% 00:27:22.517 cpu : usr=0.14%, sys=0.27%, ctx=6970, majf=0, minf=2 00:27:22.517 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:22.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.517 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.517 issued rwts: total=3384,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:22.518 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:22.518 00:27:22.518 Run status group 0 (all jobs): 00:27:22.518 READ: bw=226KiB/s (231kB/s), 226KiB/s-226KiB/s (231kB/s-231kB/s), io=13.2MiB (13.9MB), run=60016-60016msec 00:27:22.518 WRITE: bw=239KiB/s (245kB/s), 239KiB/s-239KiB/s (245kB/s-245kB/s), io=14.0MiB (14.7MB), run=60016-60016msec 00:27:22.518 00:27:22.518 Disk stats (read/write): 00:27:22.518 nvme0n1: ios=3480/3584, merge=0/0, ticks=18729/1215, in_queue=19944, util=99.64% 00:27:22.518 16:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:22.518 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:22.518 16:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:22.518 16:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:27:22.518 16:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:22.518 16:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # 
grep -q -w SPDKISFASTANDAWESOME 00:27:22.518 16:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:22.518 16:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:22.518 16:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:27:22.518 16:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:22.518 16:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:22.518 nvmf hotplug test: fio successful as expected 00:27:22.518 16:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:22.518 16:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.518 16:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:22.518 16:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.518 16:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:22.518 16:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:22.518 16:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:22.518 16:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:22.518 16:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:27:22.518 16:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:22.518 16:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:27:22.518 16:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:22.518 16:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:22.518 rmmod nvme_tcp 00:27:22.518 rmmod nvme_fabrics 00:27:22.518 rmmod nvme_keyring 00:27:22.518 16:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:22.518 16:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:27:22.518 16:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:27:22.518 16:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 720728 ']' 00:27:22.518 16:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 720728 00:27:22.518 16:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 720728 ']' 00:27:22.518 16:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 720728 00:27:22.518 16:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:27:22.518 16:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux 
= Linux ']' 00:27:22.518 16:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 720728 00:27:22.518 16:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:22.518 16:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:22.518 16:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 720728' 00:27:22.518 killing process with pid 720728 00:27:22.518 16:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 720728 00:27:22.518 16:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 720728 00:27:22.518 16:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:22.518 16:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:22.518 16:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:22.518 16:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:22.518 16:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:22.518 16:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:22.518 16:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:22.518 16:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:24.425 16:32:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:24.425 00:27:24.425 real 1m10.037s 00:27:24.425 user 4m15.384s 00:27:24.425 sys 0m7.193s 00:27:24.425 16:32:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:24.425 16:32:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:24.425 ************************************ 00:27:24.425 END TEST nvmf_initiator_timeout 00:27:24.425 ************************************ 00:27:24.425 16:32:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:27:24.425 16:32:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:27:24.425 16:32:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:27:24.425 16:32:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:27:24.425 16:32:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:26.325 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:26.325 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:27:26.325 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:26.325 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:26.325 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:26.325 16:32:45 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@293 -- # pci_drivers=() 00:27:26.325 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:26.325 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:27:26.325 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:26.325 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:27:26.325 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:27:26.325 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:27:26.325 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:27:26.325 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:27:26.325 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:27:26.325 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:26.325 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:26.325 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:26.325 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:26.325 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:26.325 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:26.325 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:26.325 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:26.325 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:26.325 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:26.325 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:26.325 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:26.325 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:26.325 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:26.325 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:26.325 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:26.325 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:26.325 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:26.325 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:26.325 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:26.325 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:26.326 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:26.326 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:26.326 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh 
--transport=tcp 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:26.326 ************************************ 00:27:26.326 START TEST nvmf_perf_adq 00:27:26.326 ************************************ 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:26.326 * Looking for test storage... 00:27:26.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
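[annotation] The gather_supported_nvmf_pci_devs trace above boils down to walking sysfs for each supported NIC PCI function and collecting the kernel net devices it exposes. A minimal sketch of that loop is below; the PCI addresses are the two ice functions reported in this run, and the loop is a simplification of the full nvmf/common.sh logic, not a copy of it.

# Sketch: list kernel net devices behind the supported NIC PCI functions.
# Addresses 0000:0a:00.0 / 0000:0a:00.1 are the ones this run discovered.
for pci in 0000:0a:00.0 0000:0a:00.1; do
    for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$netdir" ] || continue          # function exposes no netdev (e.g. bound to a DPDK driver)
        echo "Found net device under $pci: ${netdir##*/}"
    done
done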
00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:26.326 16:32:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:26.326 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:28.227 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:28.227 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:28.227 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:28.227 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:28.227 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:28.227 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:28.227 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:28.227 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:28.227 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:28.227 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:28.227 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:28.227 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:28.227 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:28.227 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:28.227 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:28.227 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:28.227 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:28.227 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:28.227 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:28.227 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:28.227 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:28.227 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:28.227 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:28.227 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:28.227 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:28.227 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:28.227 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:28.227 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:28.228 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:28.228 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:28.228 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:28.228 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:27:28.228 16:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:29.166 16:32:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:31.070 16:32:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:36.346 16:32:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:36.346 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:36.346 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:36.346 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:36.346 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:36.346 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:36.347 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:36.347 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:36.347 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:36.347 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:36.347 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:36.347 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:36.347 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:36.347 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:36.347 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:36.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:36.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:27:36.347 00:27:36.347 --- 10.0.0.2 ping statistics --- 00:27:36.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.347 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:27:36.347 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:36.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:36.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:27:36.347 00:27:36.347 --- 10.0.0.1 ping statistics --- 00:27:36.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.347 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:27:36.347 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:36.347 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:36.347 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:36.347 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:36.347 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:36.347 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:36.347 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:36.347 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:36.347 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:36.347 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:36.347 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:36.347 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:36.347 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:36.347 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=733011 00:27:36.347 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:36.347 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 733011 00:27:36.347 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 733011 ']' 00:27:36.347 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:36.347 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:36.347 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:36.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:36.347 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:36.347 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:36.347 [2024-07-26 16:32:55.878644] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:27:36.347 [2024-07-26 16:32:55.878794] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:36.347 EAL: No free 2048 kB hugepages reported on node 1 00:27:36.347 [2024-07-26 16:32:56.014541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:36.607 [2024-07-26 16:32:56.276151] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:36.607 [2024-07-26 16:32:56.276209] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:36.607 [2024-07-26 16:32:56.276235] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:36.607 [2024-07-26 16:32:56.276255] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:36.607 [2024-07-26 16:32:56.276275] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
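[annotation] The nvmf_tcp_init trace above builds a two-port loopback topology on one host: the first ice port (cvl_0_0) is moved into a private network namespace as the target side, the second port (cvl_0_1) stays in the root namespace as the initiator side, and connectivity is verified with ping before nvmf_tgt is started inside the namespace. The condensed sketch below repeats those commands; interface names, addresses, and paths are the ones reported in this run.

# Target side: cvl_0_0 in netns cvl_0_0_ns_spdk at 10.0.0.2/24
# Initiator side: cvl_0_1 in the root namespace at 10.0.0.1/24
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                                    # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target ns -> initiator
# Target is then launched inside the namespace, paused until RPC configuration:
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &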
00:27:36.607 [2024-07-26 16:32:56.276384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:36.607 [2024-07-26 16:32:56.276476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:36.607 [2024-07-26 16:32:56.276509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.607 [2024-07-26 16:32:56.276521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:37.174 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:37.174 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:27:37.174 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:37.174 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:37.174 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.174 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:37.174 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:27:37.174 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:37.174 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:37.174 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.174 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.174 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.174 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:37.174 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:37.174 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.174 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.174 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.174 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:37.174 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.174 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.433 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.433 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:37.433 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.433 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.433 [2024-07-26 16:32:57.190843] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:37.691 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:27:37.691 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:37.691 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.691 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.691 Malloc1 00:27:37.691 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.691 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:37.691 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.691 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.691 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.691 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:37.691 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.691 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.691 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.691 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:37.691 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.691 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.692 [2024-07-26 16:32:57.295567] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:37.692 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.692 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=733264 00:27:37.692 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:37.692 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:27:37.692 EAL: No free 2048 kB hugepages reported on node 1 00:27:39.597 16:32:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:27:39.597 16:32:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.597 16:32:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.597 16:32:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.597 16:32:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:27:39.597 "tick_rate": 2700000000, 00:27:39.597 "poll_groups": [ 00:27:39.597 { 00:27:39.597 "name": "nvmf_tgt_poll_group_000", 00:27:39.597 "admin_qpairs": 1, 00:27:39.597 "io_qpairs": 1, 00:27:39.597 "current_admin_qpairs": 1, 00:27:39.597 
"current_io_qpairs": 1, 00:27:39.597 "pending_bdev_io": 0, 00:27:39.597 "completed_nvme_io": 17332, 00:27:39.597 "transports": [ 00:27:39.597 { 00:27:39.597 "trtype": "TCP" 00:27:39.597 } 00:27:39.597 ] 00:27:39.597 }, 00:27:39.597 { 00:27:39.597 "name": "nvmf_tgt_poll_group_001", 00:27:39.597 "admin_qpairs": 0, 00:27:39.597 "io_qpairs": 1, 00:27:39.597 "current_admin_qpairs": 0, 00:27:39.597 "current_io_qpairs": 1, 00:27:39.597 "pending_bdev_io": 0, 00:27:39.597 "completed_nvme_io": 17059, 00:27:39.597 "transports": [ 00:27:39.597 { 00:27:39.597 "trtype": "TCP" 00:27:39.597 } 00:27:39.597 ] 00:27:39.597 }, 00:27:39.597 { 00:27:39.597 "name": "nvmf_tgt_poll_group_002", 00:27:39.597 "admin_qpairs": 0, 00:27:39.597 "io_qpairs": 1, 00:27:39.597 "current_admin_qpairs": 0, 00:27:39.597 "current_io_qpairs": 1, 00:27:39.597 "pending_bdev_io": 0, 00:27:39.597 "completed_nvme_io": 13991, 00:27:39.597 "transports": [ 00:27:39.597 { 00:27:39.597 "trtype": "TCP" 00:27:39.597 } 00:27:39.597 ] 00:27:39.597 }, 00:27:39.597 { 00:27:39.597 "name": "nvmf_tgt_poll_group_003", 00:27:39.597 "admin_qpairs": 0, 00:27:39.597 "io_qpairs": 1, 00:27:39.597 "current_admin_qpairs": 0, 00:27:39.597 "current_io_qpairs": 1, 00:27:39.597 "pending_bdev_io": 0, 00:27:39.597 "completed_nvme_io": 17251, 00:27:39.597 "transports": [ 00:27:39.597 { 00:27:39.597 "trtype": "TCP" 00:27:39.597 } 00:27:39.597 ] 00:27:39.597 } 00:27:39.597 ] 00:27:39.597 }' 00:27:39.597 16:32:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:39.597 16:32:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:27:39.891 16:32:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:27:39.891 16:32:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:27:39.891 16:32:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 733264 00:27:48.010 Initializing NVMe Controllers 00:27:48.010 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:48.010 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:48.010 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:48.010 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:48.010 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:48.010 Initialization complete. Launching workers. 
00:27:48.010 ======================================================== 00:27:48.010 Latency(us) 00:27:48.010 Device Information : IOPS MiB/s Average min max 00:27:48.010 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9313.39 36.38 6874.77 3744.95 10701.97 00:27:48.010 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9309.39 36.36 6875.37 2585.25 11457.70 00:27:48.010 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7567.45 29.56 8461.92 2561.23 12642.98 00:27:48.010 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9399.99 36.72 6808.45 3508.79 9515.03 00:27:48.010 ======================================================== 00:27:48.010 Total : 35590.23 139.02 7194.88 2561.23 12642.98 00:27:48.010 00:27:48.010 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:27:48.010 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:48.010 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:48.010 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:48.010 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:48.010 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:48.010 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:48.010 rmmod nvme_tcp 00:27:48.010 rmmod nvme_fabrics 00:27:48.010 rmmod nvme_keyring 00:27:48.010 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:48.010 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:48.010 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:48.010 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 733011 ']' 00:27:48.010 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 733011 00:27:48.010 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 733011 ']' 00:27:48.010 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 733011 00:27:48.010 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:27:48.010 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:48.010 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 733011 00:27:48.010 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:48.010 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:48.010 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 733011' 00:27:48.010 killing process with pid 733011 00:27:48.010 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 733011 00:27:48.010 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 733011 00:27:49.386 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:49.386 16:33:09 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:49.386 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:49.386 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:49.386 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:49.386 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:49.386 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:49.386 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:51.925 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:51.925 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:27:51.925 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:52.185 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:54.085 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:59.363 16:33:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:59.363 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:59.363 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound 
]] 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:59.364 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:59.364 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.364 16:33:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:59.364 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:59.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:59.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:27:59.364 00:27:59.364 --- 10.0.0.2 ping statistics --- 00:27:59.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.364 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:59.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:59.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:27:59.364 00:27:59.364 --- 10.0.0.1 ping statistics --- 00:27:59.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.364 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:59.364 net.core.busy_poll = 1 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:59.364 net.core.busy_read = 1 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:59.364 16:33:18 
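The trace above completes the test topology and the ADQ-style queue configuration for the measured run: both ports of the same E810 NIC are used, with cvl_0_0 moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2/24), cvl_0_1 left in the root namespace as the initiator side (10.0.0.1/24), reachability confirmed by the two pings, and NVMe/TCP traffic to port 4420 steered into its own hardware traffic class. A condensed sketch of that configuration, using the interface names and addresses from this run (they are host-specific, not fixed values):

  #!/usr/bin/env bash
  NS=cvl_0_0_ns_spdk          # namespace holding the target-side port
  IFACE=cvl_0_0               # E810 port bound to the ice driver in this run

  # Socket busy polling, set in the root namespace exactly as perf_adq.sh does.
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1

  # Hardware TC offload on the target port, packet-inspect optimization off.
  ip netns exec "$NS" ethtool --offload "$IFACE" hw-tc-offload on
  ip netns exec "$NS" ethtool --set-priv-flags "$IFACE" channel-pkt-inspect-optimize off

  # Two traffic classes in channel mode: TC0 on queues 0-1, TC1 on queues 2-3.
  ip netns exec "$NS" tc qdisc add dev "$IFACE" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  ip netns exec "$NS" tc qdisc add dev "$IFACE" ingress

  # Steer NVMe/TCP (destination 10.0.0.2:4420) into hardware TC 1, hardware-only.
  ip netns exec "$NS" tc filter add dev "$IFACE" protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The trace then runs scripts/perf/nvmf/set_xps_rxqs from the SPDK checkout against cvl_0_0; going by its name it aligns XPS transmit steering with the receive queues, and its exact behaviour is defined by that script rather than reproduced here.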
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=736610 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 736610 00:27:59.364 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 736610 ']' 00:27:59.365 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:59.365 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:59.365 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:59.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:59.365 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:59.365 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:59.365 [2024-07-26 16:33:19.023669] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:27:59.365 [2024-07-26 16:33:19.023842] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:59.365 EAL: No free 2048 kB hugepages reported on node 1 00:27:59.624 [2024-07-26 16:33:19.167228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:59.884 [2024-07-26 16:33:19.430852] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:59.884 [2024-07-26 16:33:19.430925] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:59.884 [2024-07-26 16:33:19.430955] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:59.884 [2024-07-26 16:33:19.430978] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:59.884 [2024-07-26 16:33:19.431000] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:59.884 [2024-07-26 16:33:19.431132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:59.884 [2024-07-26 16:33:19.431165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:59.884 [2024-07-26 16:33:19.431226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.884 [2024-07-26 16:33:19.431236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:00.451 16:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:00.451 16:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:28:00.451 16:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:00.451 16:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:00.451 16:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:00.451 16:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:00.451 16:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:28:00.451 16:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:00.451 16:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:00.451 16:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.451 16:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:00.451 16:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.451 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:00.451 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:00.451 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.451 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:00.451 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.451 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:00.451 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.451 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:00.708 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.708 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:00.708 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.708 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:00.708 [2024-07-26 16:33:20.383619] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:00.708 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:28:00.708 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:00.708 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.708 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:00.708 Malloc1 00:28:00.708 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.708 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:00.708 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.708 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:00.965 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.965 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:00.965 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.965 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:00.965 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.965 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:00.965 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.965 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:00.965 [2024-07-26 16:33:20.485832] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:00.965 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.965 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=736812 00:28:00.965 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:00.965 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:28:00.965 EAL: No free 2048 kB hugepages reported on node 1 00:28:02.869 16:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:28:02.869 16:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.869 16:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:02.869 16:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.869 16:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:28:02.869 "tick_rate": 2700000000, 00:28:02.869 "poll_groups": [ 00:28:02.869 { 00:28:02.869 "name": "nvmf_tgt_poll_group_000", 00:28:02.869 "admin_qpairs": 1, 00:28:02.869 "io_qpairs": 2, 00:28:02.869 "current_admin_qpairs": 1, 00:28:02.869 
"current_io_qpairs": 2, 00:28:02.869 "pending_bdev_io": 0, 00:28:02.869 "completed_nvme_io": 18039, 00:28:02.869 "transports": [ 00:28:02.869 { 00:28:02.869 "trtype": "TCP" 00:28:02.869 } 00:28:02.869 ] 00:28:02.869 }, 00:28:02.869 { 00:28:02.869 "name": "nvmf_tgt_poll_group_001", 00:28:02.869 "admin_qpairs": 0, 00:28:02.869 "io_qpairs": 2, 00:28:02.869 "current_admin_qpairs": 0, 00:28:02.869 "current_io_qpairs": 2, 00:28:02.869 "pending_bdev_io": 0, 00:28:02.869 "completed_nvme_io": 20245, 00:28:02.869 "transports": [ 00:28:02.869 { 00:28:02.869 "trtype": "TCP" 00:28:02.869 } 00:28:02.869 ] 00:28:02.869 }, 00:28:02.869 { 00:28:02.869 "name": "nvmf_tgt_poll_group_002", 00:28:02.869 "admin_qpairs": 0, 00:28:02.869 "io_qpairs": 0, 00:28:02.869 "current_admin_qpairs": 0, 00:28:02.869 "current_io_qpairs": 0, 00:28:02.869 "pending_bdev_io": 0, 00:28:02.869 "completed_nvme_io": 0, 00:28:02.869 "transports": [ 00:28:02.869 { 00:28:02.869 "trtype": "TCP" 00:28:02.869 } 00:28:02.869 ] 00:28:02.869 }, 00:28:02.869 { 00:28:02.869 "name": "nvmf_tgt_poll_group_003", 00:28:02.869 "admin_qpairs": 0, 00:28:02.869 "io_qpairs": 0, 00:28:02.869 "current_admin_qpairs": 0, 00:28:02.869 "current_io_qpairs": 0, 00:28:02.869 "pending_bdev_io": 0, 00:28:02.869 "completed_nvme_io": 0, 00:28:02.869 "transports": [ 00:28:02.869 { 00:28:02.869 "trtype": "TCP" 00:28:02.869 } 00:28:02.869 ] 00:28:02.869 } 00:28:02.869 ] 00:28:02.869 }' 00:28:02.869 16:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:02.869 16:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:28:02.869 16:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:28:02.869 16:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:28:02.869 16:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 736812 00:28:11.011 Initializing NVMe Controllers 00:28:11.011 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:11.011 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:11.011 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:11.011 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:11.011 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:11.011 Initialization complete. Launching workers. 
00:28:11.011 ======================================================== 00:28:11.011 Latency(us) 00:28:11.011 Device Information : IOPS MiB/s Average min max 00:28:11.011 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4467.90 17.45 14335.29 2475.92 59345.97 00:28:11.011 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5828.97 22.77 10980.57 2054.23 55662.14 00:28:11.011 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5162.29 20.17 12441.45 2343.43 56480.74 00:28:11.011 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5359.08 20.93 11974.28 2245.82 59481.11 00:28:11.011 ======================================================== 00:28:11.011 Total : 20818.24 81.32 12318.60 2054.23 59481.11 00:28:11.011 00:28:11.011 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:28:11.011 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:11.011 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:28:11.011 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:11.011 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:28:11.011 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:11.011 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:11.269 rmmod nvme_tcp 00:28:11.269 rmmod nvme_fabrics 00:28:11.269 rmmod nvme_keyring 00:28:11.269 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:11.269 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:28:11.269 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:28:11.269 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 736610 ']' 00:28:11.269 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 736610 00:28:11.269 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 736610 ']' 00:28:11.269 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 736610 00:28:11.269 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:28:11.269 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:11.270 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 736610 00:28:11.270 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:11.270 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:11.270 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 736610' 00:28:11.270 killing process with pid 736610 00:28:11.270 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 736610 00:28:11.270 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 736610 00:28:12.649 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:12.649 16:33:32 
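The results table above is produced by the spdk_nvme_perf initiator launched earlier in the trace (pid 736812): queue depth 64, 4 KiB random reads for 10 seconds, with the workers pinned to cores 4-7 by the 0xF0 core mask, one worker per core as the per-core rows in the table show. Reproduced on its own, with the path given relative to the SPDK checkout:

  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The -r string names the NVMe-oF TCP endpoint exposed inside the target namespace above; any other transport ID accepted by SPDK would work in its place.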
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:12.649 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:12.649 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:12.649 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:12.649 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.649 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:12.649 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:14.550 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:14.550 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:28:14.550 00:28:14.550 real 0m48.466s 00:28:14.550 user 2m48.142s 00:28:14.550 sys 0m11.853s 00:28:14.550 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:14.550 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:14.550 ************************************ 00:28:14.550 END TEST nvmf_perf_adq 00:28:14.550 ************************************ 00:28:14.550 16:33:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:14.550 16:33:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:14.550 16:33:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:14.550 16:33:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:14.808 ************************************ 00:28:14.808 START TEST nvmf_shutdown 00:28:14.808 ************************************ 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:14.808 * Looking for test storage... 
00:28:14.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.808 16:33:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:14.808 16:33:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:14.808 ************************************ 00:28:14.808 START TEST nvmf_shutdown_tc1 00:28:14.808 ************************************ 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:14.808 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:16.711 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:16.711 16:33:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:16.711 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:16.711 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
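Device discovery in nvmf/common.sh, traced above for the shutdown tests, works by PCI vendor:device ID: 0x8086 0x1592/0x159b go into the e810 array, 0x8086 0x37d2 into x722, the listed 0x15b3 IDs into mlx, and each matching PCI function is then resolved to its kernel netdev through sysfs. A stripped-down version of that last lookup, using the first port found in this run as the example:

  pci=0000:0a:00.0    # reported above as 0x8086 - 0x159b, an E810 port using the ice driver
  for path in /sys/bus/pci/devices/"$pci"/net/*; do
      [ -e "$path" ] && echo "Found net device under $pci: ${path##*/}"   # cvl_0_0 on this host
  done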
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:16.711 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:16.711 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:16.712 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:16.712 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:16.712 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:16.712 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:16.712 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:16.712 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:16.712 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:16.970 16:33:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:16.970 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:16.970 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:16.970 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:16.970 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:16.970 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:16.970 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:16.970 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:16.970 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:28:16.970 00:28:16.970 --- 10.0.0.2 ping statistics --- 00:28:16.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:16.970 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:28:16.970 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:16.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:16.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:28:16.970 00:28:16.970 --- 10.0.0.1 ping statistics --- 00:28:16.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:16.970 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:28:16.970 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:16.970 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:28:16.970 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:16.970 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:16.970 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:16.970 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:16.970 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:16.970 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:16.970 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:16.970 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:16.970 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:16.970 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:16.970 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 
-- # set +x 00:28:16.970 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=740101 00:28:16.970 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 740101 00:28:16.970 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:16.970 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 740101 ']' 00:28:16.970 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:16.970 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:16.970 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:16.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:16.970 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:16.970 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:16.970 [2024-07-26 16:33:36.707432] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:16.970 [2024-07-26 16:33:36.707565] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:17.230 EAL: No free 2048 kB hugepages reported on node 1 00:28:17.230 [2024-07-26 16:33:36.845259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:17.489 [2024-07-26 16:33:37.106252] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:17.489 [2024-07-26 16:33:37.106315] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:17.489 [2024-07-26 16:33:37.106341] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:17.489 [2024-07-26 16:33:37.106378] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:17.489 [2024-07-26 16:33:37.106401] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
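Note: the nvmf/common.sh setup traced above amounts to the following namespace topology, shown here as a minimal standalone sketch. The interface names cvl_0_0/cvl_0_1 and the namespace name cvl_0_0_ns_spdk are the ones this rig reports; other hosts will expose different port names.

# target-side port lives in its own network namespace, initiator side stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP connections from the initiator side
ping -c 1 10.0.0.2                                             # initiator -> target reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator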
00:28:17.489 [2024-07-26 16:33:37.106537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:17.489 [2024-07-26 16:33:37.106697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:17.489 [2024-07-26 16:33:37.106746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.489 [2024-07-26 16:33:37.106753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:18.054 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:18.054 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:28:18.054 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:18.054 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:18.054 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:18.054 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:18.054 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:18.055 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.055 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:18.055 [2024-07-26 16:33:37.667872] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:18.055 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.055 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:18.055 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:18.055 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:18.055 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:18.055 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:18.055 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:18.055 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:18.055 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:18.055 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:18.055 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:18.055 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:18.055 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:28:18.055 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:18.055 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:18.055 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:18.055 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:18.055 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:18.055 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:18.055 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:18.055 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:18.055 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:18.055 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:18.055 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:18.055 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:18.055 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:18.055 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:18.055 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.055 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:18.055 Malloc1 00:28:18.055 [2024-07-26 16:33:37.794375] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:18.313 Malloc2 00:28:18.313 Malloc3 00:28:18.313 Malloc4 00:28:18.571 Malloc5 00:28:18.571 Malloc6 00:28:18.830 Malloc7 00:28:18.830 Malloc8 00:28:18.830 Malloc9 00:28:19.091 Malloc10 00:28:19.091 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.091 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:19.091 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:19.091 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:19.091 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=740410 00:28:19.091 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 740410 /var/tmp/bdevperf.sock 00:28:19.091 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 740410 ']' 00:28:19.091 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:19.091 16:33:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:19.091 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:19.091 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:19.091 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:28:19.091 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:19.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:19.091 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:28:19.091 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:19.091 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.091 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:19.091 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.091 { 00:28:19.091 "params": { 00:28:19.091 "name": "Nvme$subsystem", 00:28:19.091 "trtype": "$TEST_TRANSPORT", 00:28:19.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.091 "adrfam": "ipv4", 00:28:19.091 "trsvcid": "$NVMF_PORT", 00:28:19.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.091 "hdgst": ${hdgst:-false}, 00:28:19.091 "ddgst": ${ddgst:-false} 00:28:19.091 }, 00:28:19.091 "method": "bdev_nvme_attach_controller" 00:28:19.091 } 00:28:19.091 EOF 00:28:19.091 )") 00:28:19.091 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:19.091 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.091 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.091 { 00:28:19.091 "params": { 00:28:19.091 "name": "Nvme$subsystem", 00:28:19.091 "trtype": "$TEST_TRANSPORT", 00:28:19.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.091 "adrfam": "ipv4", 00:28:19.091 "trsvcid": "$NVMF_PORT", 00:28:19.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.091 "hdgst": ${hdgst:-false}, 00:28:19.091 "ddgst": ${ddgst:-false} 00:28:19.091 }, 00:28:19.091 "method": "bdev_nvme_attach_controller" 00:28:19.091 } 00:28:19.091 EOF 00:28:19.091 )") 00:28:19.091 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:19.091 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.091 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.091 { 00:28:19.091 "params": { 00:28:19.091 "name": 
"Nvme$subsystem", 00:28:19.091 "trtype": "$TEST_TRANSPORT", 00:28:19.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.091 "adrfam": "ipv4", 00:28:19.091 "trsvcid": "$NVMF_PORT", 00:28:19.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.091 "hdgst": ${hdgst:-false}, 00:28:19.091 "ddgst": ${ddgst:-false} 00:28:19.091 }, 00:28:19.091 "method": "bdev_nvme_attach_controller" 00:28:19.091 } 00:28:19.091 EOF 00:28:19.091 )") 00:28:19.091 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:19.091 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.091 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.091 { 00:28:19.091 "params": { 00:28:19.091 "name": "Nvme$subsystem", 00:28:19.091 "trtype": "$TEST_TRANSPORT", 00:28:19.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.091 "adrfam": "ipv4", 00:28:19.091 "trsvcid": "$NVMF_PORT", 00:28:19.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.091 "hdgst": ${hdgst:-false}, 00:28:19.091 "ddgst": ${ddgst:-false} 00:28:19.091 }, 00:28:19.091 "method": "bdev_nvme_attach_controller" 00:28:19.091 } 00:28:19.091 EOF 00:28:19.091 )") 00:28:19.091 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:19.091 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.091 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.091 { 00:28:19.091 "params": { 00:28:19.091 "name": "Nvme$subsystem", 00:28:19.091 "trtype": "$TEST_TRANSPORT", 00:28:19.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.091 "adrfam": "ipv4", 00:28:19.091 "trsvcid": "$NVMF_PORT", 00:28:19.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.091 "hdgst": ${hdgst:-false}, 00:28:19.091 "ddgst": ${ddgst:-false} 00:28:19.091 }, 00:28:19.091 "method": "bdev_nvme_attach_controller" 00:28:19.091 } 00:28:19.091 EOF 00:28:19.091 )") 00:28:19.091 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:19.091 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.091 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.091 { 00:28:19.091 "params": { 00:28:19.091 "name": "Nvme$subsystem", 00:28:19.091 "trtype": "$TEST_TRANSPORT", 00:28:19.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.091 "adrfam": "ipv4", 00:28:19.091 "trsvcid": "$NVMF_PORT", 00:28:19.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.092 "hdgst": ${hdgst:-false}, 00:28:19.092 "ddgst": ${ddgst:-false} 00:28:19.092 }, 00:28:19.092 "method": "bdev_nvme_attach_controller" 00:28:19.092 } 00:28:19.092 EOF 00:28:19.092 )") 00:28:19.092 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:19.092 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:28:19.092 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.092 { 00:28:19.092 "params": { 00:28:19.092 "name": "Nvme$subsystem", 00:28:19.092 "trtype": "$TEST_TRANSPORT", 00:28:19.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.092 "adrfam": "ipv4", 00:28:19.092 "trsvcid": "$NVMF_PORT", 00:28:19.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.092 "hdgst": ${hdgst:-false}, 00:28:19.092 "ddgst": ${ddgst:-false} 00:28:19.092 }, 00:28:19.092 "method": "bdev_nvme_attach_controller" 00:28:19.092 } 00:28:19.092 EOF 00:28:19.092 )") 00:28:19.092 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:19.092 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.092 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.092 { 00:28:19.092 "params": { 00:28:19.092 "name": "Nvme$subsystem", 00:28:19.092 "trtype": "$TEST_TRANSPORT", 00:28:19.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.092 "adrfam": "ipv4", 00:28:19.092 "trsvcid": "$NVMF_PORT", 00:28:19.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.092 "hdgst": ${hdgst:-false}, 00:28:19.092 "ddgst": ${ddgst:-false} 00:28:19.092 }, 00:28:19.092 "method": "bdev_nvme_attach_controller" 00:28:19.092 } 00:28:19.092 EOF 00:28:19.092 )") 00:28:19.092 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:19.092 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.092 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.092 { 00:28:19.092 "params": { 00:28:19.092 "name": "Nvme$subsystem", 00:28:19.092 "trtype": "$TEST_TRANSPORT", 00:28:19.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.092 "adrfam": "ipv4", 00:28:19.092 "trsvcid": "$NVMF_PORT", 00:28:19.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.092 "hdgst": ${hdgst:-false}, 00:28:19.092 "ddgst": ${ddgst:-false} 00:28:19.092 }, 00:28:19.092 "method": "bdev_nvme_attach_controller" 00:28:19.092 } 00:28:19.092 EOF 00:28:19.092 )") 00:28:19.092 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:19.092 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.092 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.092 { 00:28:19.092 "params": { 00:28:19.092 "name": "Nvme$subsystem", 00:28:19.092 "trtype": "$TEST_TRANSPORT", 00:28:19.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.092 "adrfam": "ipv4", 00:28:19.092 "trsvcid": "$NVMF_PORT", 00:28:19.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.092 "hdgst": ${hdgst:-false}, 00:28:19.092 "ddgst": ${ddgst:-false} 00:28:19.092 }, 00:28:19.092 "method": "bdev_nvme_attach_controller" 00:28:19.092 } 00:28:19.092 EOF 00:28:19.092 )") 00:28:19.092 16:33:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:19.092 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:28:19.092 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:28:19.092 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:19.092 "params": { 00:28:19.092 "name": "Nvme1", 00:28:19.092 "trtype": "tcp", 00:28:19.092 "traddr": "10.0.0.2", 00:28:19.092 "adrfam": "ipv4", 00:28:19.092 "trsvcid": "4420", 00:28:19.092 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:19.092 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:19.092 "hdgst": false, 00:28:19.092 "ddgst": false 00:28:19.092 }, 00:28:19.092 "method": "bdev_nvme_attach_controller" 00:28:19.092 },{ 00:28:19.092 "params": { 00:28:19.092 "name": "Nvme2", 00:28:19.092 "trtype": "tcp", 00:28:19.092 "traddr": "10.0.0.2", 00:28:19.092 "adrfam": "ipv4", 00:28:19.092 "trsvcid": "4420", 00:28:19.092 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:19.092 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:19.092 "hdgst": false, 00:28:19.092 "ddgst": false 00:28:19.092 }, 00:28:19.092 "method": "bdev_nvme_attach_controller" 00:28:19.092 },{ 00:28:19.092 "params": { 00:28:19.092 "name": "Nvme3", 00:28:19.092 "trtype": "tcp", 00:28:19.092 "traddr": "10.0.0.2", 00:28:19.092 "adrfam": "ipv4", 00:28:19.092 "trsvcid": "4420", 00:28:19.092 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:19.092 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:19.092 "hdgst": false, 00:28:19.092 "ddgst": false 00:28:19.092 }, 00:28:19.092 "method": "bdev_nvme_attach_controller" 00:28:19.092 },{ 00:28:19.092 "params": { 00:28:19.092 "name": "Nvme4", 00:28:19.092 "trtype": "tcp", 00:28:19.092 "traddr": "10.0.0.2", 00:28:19.092 "adrfam": "ipv4", 00:28:19.092 "trsvcid": "4420", 00:28:19.092 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:19.092 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:19.092 "hdgst": false, 00:28:19.092 "ddgst": false 00:28:19.092 }, 00:28:19.092 "method": "bdev_nvme_attach_controller" 00:28:19.092 },{ 00:28:19.092 "params": { 00:28:19.092 "name": "Nvme5", 00:28:19.092 "trtype": "tcp", 00:28:19.092 "traddr": "10.0.0.2", 00:28:19.092 "adrfam": "ipv4", 00:28:19.092 "trsvcid": "4420", 00:28:19.092 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:19.092 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:19.092 "hdgst": false, 00:28:19.092 "ddgst": false 00:28:19.092 }, 00:28:19.092 "method": "bdev_nvme_attach_controller" 00:28:19.092 },{ 00:28:19.092 "params": { 00:28:19.092 "name": "Nvme6", 00:28:19.092 "trtype": "tcp", 00:28:19.092 "traddr": "10.0.0.2", 00:28:19.092 "adrfam": "ipv4", 00:28:19.092 "trsvcid": "4420", 00:28:19.092 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:19.092 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:19.092 "hdgst": false, 00:28:19.092 "ddgst": false 00:28:19.092 }, 00:28:19.092 "method": "bdev_nvme_attach_controller" 00:28:19.092 },{ 00:28:19.092 "params": { 00:28:19.092 "name": "Nvme7", 00:28:19.092 "trtype": "tcp", 00:28:19.092 "traddr": "10.0.0.2", 00:28:19.092 "adrfam": "ipv4", 00:28:19.092 "trsvcid": "4420", 00:28:19.092 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:19.092 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:19.092 "hdgst": false, 00:28:19.092 "ddgst": false 00:28:19.092 }, 00:28:19.092 "method": "bdev_nvme_attach_controller" 00:28:19.092 },{ 00:28:19.092 "params": { 00:28:19.092 "name": "Nvme8", 00:28:19.092 "trtype": "tcp", 
00:28:19.092 "traddr": "10.0.0.2", 00:28:19.092 "adrfam": "ipv4", 00:28:19.092 "trsvcid": "4420", 00:28:19.092 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:19.092 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:19.092 "hdgst": false, 00:28:19.092 "ddgst": false 00:28:19.092 }, 00:28:19.092 "method": "bdev_nvme_attach_controller" 00:28:19.092 },{ 00:28:19.092 "params": { 00:28:19.092 "name": "Nvme9", 00:28:19.092 "trtype": "tcp", 00:28:19.092 "traddr": "10.0.0.2", 00:28:19.092 "adrfam": "ipv4", 00:28:19.092 "trsvcid": "4420", 00:28:19.092 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:19.092 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:19.092 "hdgst": false, 00:28:19.092 "ddgst": false 00:28:19.092 }, 00:28:19.092 "method": "bdev_nvme_attach_controller" 00:28:19.092 },{ 00:28:19.092 "params": { 00:28:19.092 "name": "Nvme10", 00:28:19.092 "trtype": "tcp", 00:28:19.092 "traddr": "10.0.0.2", 00:28:19.092 "adrfam": "ipv4", 00:28:19.092 "trsvcid": "4420", 00:28:19.092 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:19.092 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:19.092 "hdgst": false, 00:28:19.092 "ddgst": false 00:28:19.092 }, 00:28:19.092 "method": "bdev_nvme_attach_controller" 00:28:19.092 }' 00:28:19.092 [2024-07-26 16:33:38.798541] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:19.093 [2024-07-26 16:33:38.798689] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:19.351 EAL: No free 2048 kB hugepages reported on node 1 00:28:19.351 [2024-07-26 16:33:38.927706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.610 [2024-07-26 16:33:39.168280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.140 16:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:22.140 16:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:28:22.140 16:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:22.140 16:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.140 16:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:22.140 16:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.140 16:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 740410 00:28:22.140 16:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:28:22.140 16:33:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:28:23.075 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 740410 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:23.075 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 740101 00:28:23.075 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:23.075 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:23.075 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:28:23.075 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:28:23.075 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:23.075 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:23.075 { 00:28:23.075 "params": { 00:28:23.075 "name": "Nvme$subsystem", 00:28:23.075 "trtype": "$TEST_TRANSPORT", 00:28:23.075 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:23.075 "adrfam": "ipv4", 00:28:23.075 "trsvcid": "$NVMF_PORT", 00:28:23.075 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:23.075 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:23.075 "hdgst": ${hdgst:-false}, 00:28:23.075 "ddgst": ${ddgst:-false} 00:28:23.075 }, 00:28:23.075 "method": "bdev_nvme_attach_controller" 00:28:23.075 } 00:28:23.075 EOF 00:28:23.075 )") 00:28:23.075 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:23.075 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:23.075 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:23.075 { 00:28:23.075 "params": { 00:28:23.075 "name": "Nvme$subsystem", 00:28:23.075 "trtype": "$TEST_TRANSPORT", 00:28:23.075 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:23.075 "adrfam": "ipv4", 00:28:23.075 "trsvcid": "$NVMF_PORT", 00:28:23.075 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:23.075 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:23.075 "hdgst": ${hdgst:-false}, 00:28:23.075 "ddgst": ${ddgst:-false} 00:28:23.075 }, 00:28:23.075 "method": "bdev_nvme_attach_controller" 00:28:23.075 } 00:28:23.075 EOF 00:28:23.075 )") 00:28:23.075 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:23.075 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:23.075 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:23.075 { 00:28:23.075 "params": { 00:28:23.075 "name": "Nvme$subsystem", 00:28:23.075 "trtype": "$TEST_TRANSPORT", 00:28:23.075 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:23.075 "adrfam": "ipv4", 00:28:23.075 "trsvcid": "$NVMF_PORT", 00:28:23.075 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:23.075 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:23.075 "hdgst": ${hdgst:-false}, 00:28:23.075 "ddgst": ${ddgst:-false} 00:28:23.075 }, 00:28:23.075 "method": "bdev_nvme_attach_controller" 00:28:23.075 } 00:28:23.075 EOF 00:28:23.075 )") 00:28:23.075 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:23.075 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:23.075 16:33:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:23.075 { 00:28:23.075 "params": { 00:28:23.075 "name": "Nvme$subsystem", 00:28:23.075 "trtype": "$TEST_TRANSPORT", 00:28:23.075 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:23.075 "adrfam": "ipv4", 00:28:23.075 "trsvcid": "$NVMF_PORT", 00:28:23.075 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:23.075 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:23.075 "hdgst": ${hdgst:-false}, 00:28:23.075 "ddgst": ${ddgst:-false} 00:28:23.075 }, 00:28:23.075 "method": "bdev_nvme_attach_controller" 00:28:23.075 } 00:28:23.075 EOF 00:28:23.075 )") 00:28:23.075 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:23.075 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:23.075 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:23.075 { 00:28:23.075 "params": { 00:28:23.075 "name": "Nvme$subsystem", 00:28:23.075 "trtype": "$TEST_TRANSPORT", 00:28:23.075 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:23.075 "adrfam": "ipv4", 00:28:23.075 "trsvcid": "$NVMF_PORT", 00:28:23.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:23.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:23.076 "hdgst": ${hdgst:-false}, 00:28:23.076 "ddgst": ${ddgst:-false} 00:28:23.076 }, 00:28:23.076 "method": "bdev_nvme_attach_controller" 00:28:23.076 } 00:28:23.076 EOF 00:28:23.076 )") 00:28:23.076 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:23.076 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:23.076 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:23.076 { 00:28:23.076 "params": { 00:28:23.076 "name": "Nvme$subsystem", 00:28:23.076 "trtype": "$TEST_TRANSPORT", 00:28:23.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:23.076 "adrfam": "ipv4", 00:28:23.076 "trsvcid": "$NVMF_PORT", 00:28:23.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:23.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:23.076 "hdgst": ${hdgst:-false}, 00:28:23.076 "ddgst": ${ddgst:-false} 00:28:23.076 }, 00:28:23.076 "method": "bdev_nvme_attach_controller" 00:28:23.076 } 00:28:23.076 EOF 00:28:23.076 )") 00:28:23.076 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:23.076 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:23.076 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:23.076 { 00:28:23.076 "params": { 00:28:23.076 "name": "Nvme$subsystem", 00:28:23.076 "trtype": "$TEST_TRANSPORT", 00:28:23.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:23.076 "adrfam": "ipv4", 00:28:23.076 "trsvcid": "$NVMF_PORT", 00:28:23.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:23.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:23.076 "hdgst": ${hdgst:-false}, 00:28:23.076 "ddgst": ${ddgst:-false} 00:28:23.076 }, 00:28:23.076 "method": "bdev_nvme_attach_controller" 00:28:23.076 } 00:28:23.076 EOF 00:28:23.076 )") 00:28:23.076 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:28:23.076 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:23.076 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:23.076 { 00:28:23.076 "params": { 00:28:23.076 "name": "Nvme$subsystem", 00:28:23.076 "trtype": "$TEST_TRANSPORT", 00:28:23.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:23.076 "adrfam": "ipv4", 00:28:23.076 "trsvcid": "$NVMF_PORT", 00:28:23.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:23.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:23.076 "hdgst": ${hdgst:-false}, 00:28:23.076 "ddgst": ${ddgst:-false} 00:28:23.076 }, 00:28:23.076 "method": "bdev_nvme_attach_controller" 00:28:23.076 } 00:28:23.076 EOF 00:28:23.076 )") 00:28:23.076 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:23.076 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:23.076 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:23.076 { 00:28:23.076 "params": { 00:28:23.076 "name": "Nvme$subsystem", 00:28:23.076 "trtype": "$TEST_TRANSPORT", 00:28:23.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:23.076 "adrfam": "ipv4", 00:28:23.076 "trsvcid": "$NVMF_PORT", 00:28:23.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:23.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:23.076 "hdgst": ${hdgst:-false}, 00:28:23.076 "ddgst": ${ddgst:-false} 00:28:23.076 }, 00:28:23.076 "method": "bdev_nvme_attach_controller" 00:28:23.076 } 00:28:23.076 EOF 00:28:23.076 )") 00:28:23.076 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:23.076 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:23.076 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:23.076 { 00:28:23.076 "params": { 00:28:23.076 "name": "Nvme$subsystem", 00:28:23.076 "trtype": "$TEST_TRANSPORT", 00:28:23.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:23.076 "adrfam": "ipv4", 00:28:23.076 "trsvcid": "$NVMF_PORT", 00:28:23.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:23.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:23.076 "hdgst": ${hdgst:-false}, 00:28:23.076 "ddgst": ${ddgst:-false} 00:28:23.076 }, 00:28:23.076 "method": "bdev_nvme_attach_controller" 00:28:23.076 } 00:28:23.076 EOF 00:28:23.076 )") 00:28:23.076 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:23.076 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
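The perf run itself (shutdown.sh@91 above) feeds the generated config to bdevperf through a process substitution, which is why the trace shows --json /dev/fd/62 rather than a file on disk. With the paths and options as they appear in this job, the standalone equivalent is roughly:

# bdevperf reads the controller config via /dev/fd/NN (process substitution) and runs a
# queue-depth-64, 64 KiB verify workload for 1 second against the ten attached controllers
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 1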
00:28:23.076 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:28:23.076 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:23.076 "params": { 00:28:23.076 "name": "Nvme1", 00:28:23.076 "trtype": "tcp", 00:28:23.076 "traddr": "10.0.0.2", 00:28:23.076 "adrfam": "ipv4", 00:28:23.076 "trsvcid": "4420", 00:28:23.076 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:23.076 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:23.076 "hdgst": false, 00:28:23.076 "ddgst": false 00:28:23.076 }, 00:28:23.076 "method": "bdev_nvme_attach_controller" 00:28:23.076 },{ 00:28:23.076 "params": { 00:28:23.076 "name": "Nvme2", 00:28:23.076 "trtype": "tcp", 00:28:23.076 "traddr": "10.0.0.2", 00:28:23.076 "adrfam": "ipv4", 00:28:23.076 "trsvcid": "4420", 00:28:23.076 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:23.076 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:23.076 "hdgst": false, 00:28:23.076 "ddgst": false 00:28:23.076 }, 00:28:23.076 "method": "bdev_nvme_attach_controller" 00:28:23.076 },{ 00:28:23.076 "params": { 00:28:23.076 "name": "Nvme3", 00:28:23.076 "trtype": "tcp", 00:28:23.076 "traddr": "10.0.0.2", 00:28:23.076 "adrfam": "ipv4", 00:28:23.076 "trsvcid": "4420", 00:28:23.076 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:23.076 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:23.076 "hdgst": false, 00:28:23.076 "ddgst": false 00:28:23.076 }, 00:28:23.076 "method": "bdev_nvme_attach_controller" 00:28:23.076 },{ 00:28:23.076 "params": { 00:28:23.076 "name": "Nvme4", 00:28:23.076 "trtype": "tcp", 00:28:23.076 "traddr": "10.0.0.2", 00:28:23.076 "adrfam": "ipv4", 00:28:23.076 "trsvcid": "4420", 00:28:23.076 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:23.076 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:23.076 "hdgst": false, 00:28:23.076 "ddgst": false 00:28:23.076 }, 00:28:23.076 "method": "bdev_nvme_attach_controller" 00:28:23.076 },{ 00:28:23.076 "params": { 00:28:23.076 "name": "Nvme5", 00:28:23.076 "trtype": "tcp", 00:28:23.076 "traddr": "10.0.0.2", 00:28:23.076 "adrfam": "ipv4", 00:28:23.076 "trsvcid": "4420", 00:28:23.076 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:23.076 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:23.076 "hdgst": false, 00:28:23.076 "ddgst": false 00:28:23.076 }, 00:28:23.076 "method": "bdev_nvme_attach_controller" 00:28:23.076 },{ 00:28:23.076 "params": { 00:28:23.076 "name": "Nvme6", 00:28:23.076 "trtype": "tcp", 00:28:23.076 "traddr": "10.0.0.2", 00:28:23.076 "adrfam": "ipv4", 00:28:23.076 "trsvcid": "4420", 00:28:23.076 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:23.076 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:23.076 "hdgst": false, 00:28:23.076 "ddgst": false 00:28:23.076 }, 00:28:23.076 "method": "bdev_nvme_attach_controller" 00:28:23.076 },{ 00:28:23.076 "params": { 00:28:23.076 "name": "Nvme7", 00:28:23.076 "trtype": "tcp", 00:28:23.076 "traddr": "10.0.0.2", 00:28:23.076 "adrfam": "ipv4", 00:28:23.076 "trsvcid": "4420", 00:28:23.076 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:23.076 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:23.076 "hdgst": false, 00:28:23.076 "ddgst": false 00:28:23.076 }, 00:28:23.076 "method": "bdev_nvme_attach_controller" 00:28:23.076 },{ 00:28:23.076 "params": { 00:28:23.076 "name": "Nvme8", 00:28:23.076 "trtype": "tcp", 00:28:23.076 "traddr": "10.0.0.2", 00:28:23.076 "adrfam": "ipv4", 00:28:23.076 "trsvcid": "4420", 00:28:23.076 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:23.076 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:23.076 "hdgst": false, 00:28:23.076 "ddgst": false 00:28:23.076 }, 00:28:23.076 "method": "bdev_nvme_attach_controller" 00:28:23.076 },{ 00:28:23.076 "params": { 00:28:23.076 "name": "Nvme9", 00:28:23.076 "trtype": "tcp", 00:28:23.076 "traddr": "10.0.0.2", 00:28:23.076 "adrfam": "ipv4", 00:28:23.076 "trsvcid": "4420", 00:28:23.076 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:23.076 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:23.076 "hdgst": false, 00:28:23.076 "ddgst": false 00:28:23.076 }, 00:28:23.076 "method": "bdev_nvme_attach_controller" 00:28:23.077 },{ 00:28:23.077 "params": { 00:28:23.077 "name": "Nvme10", 00:28:23.077 "trtype": "tcp", 00:28:23.077 "traddr": "10.0.0.2", 00:28:23.077 "adrfam": "ipv4", 00:28:23.077 "trsvcid": "4420", 00:28:23.077 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:23.077 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:23.077 "hdgst": false, 00:28:23.077 "ddgst": false 00:28:23.077 }, 00:28:23.077 "method": "bdev_nvme_attach_controller" 00:28:23.077 }' 00:28:23.077 [2024-07-26 16:33:42.579501] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:23.077 [2024-07-26 16:33:42.579650] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid740844 ] 00:28:23.077 EAL: No free 2048 kB hugepages reported on node 1 00:28:23.077 [2024-07-26 16:33:42.709542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.336 [2024-07-26 16:33:42.955026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.288 Running I/O for 1 seconds... 00:28:26.664 00:28:26.664 Latency(us) 00:28:26.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:26.664 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:26.664 Verification LBA range: start 0x0 length 0x400 00:28:26.664 Nvme1n1 : 1.21 212.29 13.27 0.00 0.00 298280.20 21068.61 312242.63 00:28:26.664 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:26.664 Verification LBA range: start 0x0 length 0x400 00:28:26.664 Nvme2n1 : 1.22 210.47 13.15 0.00 0.00 293917.58 28350.39 304475.40 00:28:26.664 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:26.664 Verification LBA range: start 0x0 length 0x400 00:28:26.664 Nvme3n1 : 1.08 182.91 11.43 0.00 0.00 322890.21 21554.06 307582.29 00:28:26.664 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:26.664 Verification LBA range: start 0x0 length 0x400 00:28:26.664 Nvme4n1 : 1.23 208.85 13.05 0.00 0.00 288328.44 26020.22 315349.52 00:28:26.664 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:26.664 Verification LBA range: start 0x0 length 0x400 00:28:26.664 Nvme5n1 : 1.09 175.38 10.96 0.00 0.00 334352.43 24272.59 316902.97 00:28:26.664 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:26.664 Verification LBA range: start 0x0 length 0x400 00:28:26.664 Nvme6n1 : 1.13 170.48 10.66 0.00 0.00 332161.83 26020.22 313796.08 00:28:26.664 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:26.664 Verification LBA range: start 0x0 length 0x400 00:28:26.664 Nvme7n1 : 1.23 208.03 13.00 0.00 0.00 272922.74 7184.69 292047.83 00:28:26.664 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:26.664 
Verification LBA range: start 0x0 length 0x400 00:28:26.664 Nvme8n1 : 1.24 206.61 12.91 0.00 0.00 271471.69 23301.69 318456.41 00:28:26.664 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:26.664 Verification LBA range: start 0x0 length 0x400 00:28:26.664 Nvme9n1 : 1.16 165.14 10.32 0.00 0.00 330862.49 21845.33 327777.09 00:28:26.664 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:26.664 Verification LBA range: start 0x0 length 0x400 00:28:26.664 Nvme10n1 : 1.25 204.94 12.81 0.00 0.00 264787.25 21068.61 351078.78 00:28:26.664 =================================================================================================================== 00:28:26.664 Total : 1945.08 121.57 0.00 0.00 297832.82 7184.69 351078.78 00:28:27.603 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:28:27.603 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:27.603 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:27.603 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:27.603 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:27.603 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:27.603 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:28:27.603 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:27.603 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:28:27.603 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:27.603 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:27.603 rmmod nvme_tcp 00:28:27.603 rmmod nvme_fabrics 00:28:27.603 rmmod nvme_keyring 00:28:27.603 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:27.603 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:28:27.603 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:28:27.603 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 740101 ']' 00:28:27.603 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 740101 00:28:27.603 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 740101 ']' 00:28:27.603 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 740101 00:28:27.603 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:28:27.603 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:28:27.603 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 740101 00:28:27.603 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:27.603 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:27.603 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 740101' 00:28:27.603 killing process with pid 740101 00:28:27.603 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 740101 00:28:27.603 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 740101 00:28:30.913 16:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:30.913 16:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:30.913 16:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:30.913 16:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:30.913 16:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:30.913 16:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.913 16:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:30.913 16:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:32.817 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:32.817 00:28:32.817 real 0m17.730s 00:28:32.817 user 0m57.043s 00:28:32.817 sys 0m4.036s 00:28:32.817 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:32.818 ************************************ 00:28:32.818 END TEST nvmf_shutdown_tc1 00:28:32.818 ************************************ 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:32.818 ************************************ 00:28:32.818 START TEST nvmf_shutdown_tc2 00:28:32.818 ************************************ 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:28:32.818 16:33:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local 
-ga mlx 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:32.818 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:32.818 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:32.818 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:32.818 16:33:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:32.818 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:32.818 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:32.819 16:33:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:32.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:32.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:28:32.819 00:28:32.819 --- 10.0.0.2 ping statistics --- 00:28:32.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:32.819 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:32.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:32.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:28:32.819 00:28:32.819 --- 10.0.0.1 ping statistics --- 00:28:32.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:32.819 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=742127 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 742127 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 742127 ']' 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:32.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:32.819 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:32.819 [2024-07-26 16:33:52.472939] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:32.819 [2024-07-26 16:33:52.473104] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:32.819 EAL: No free 2048 kB hugepages reported on node 1 00:28:33.077 [2024-07-26 16:33:52.610958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:33.335 [2024-07-26 16:33:52.872216] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:33.335 [2024-07-26 16:33:52.872297] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:33.335 [2024-07-26 16:33:52.872324] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:33.335 [2024-07-26 16:33:52.872344] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:33.335 [2024-07-26 16:33:52.872366] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
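What the nvmf_tcp_init trace above boils down to: the two ports of the E810 NIC were found as cvl_0_0 (under 0000:0a:00.0) and cvl_0_1 (under 0000:0a:00.1); cvl_0_0 becomes the target interface and is moved into a fresh network namespace, cvl_0_0_ns_spdk, with address 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator side with 10.0.0.1, so target and initiator traffic crosses the physical link on a single host. Recapping the commands already visible in the trace as a plain sequence (interface, namespace and address names are the ones from this particular run, not fixed values), the setup is roughly:

    ip netns add cvl_0_0_ns_spdk                                        # namespace that will host the SPDK target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open TCP port 4420 (NVMe/TCP) on the initiator-side port
    ping -c 1 10.0.0.2                                                  # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # namespace -> initiator

nvmf_tgt (pid 742127 in this run) is then launched with the ip netns exec cvl_0_0_ns_spdk prefix, which is why the TCP listener that shows up a little further down binds 10.0.0.2:4420 from inside that namespace.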
00:28:33.335 [2024-07-26 16:33:52.872516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:33.335 [2024-07-26 16:33:52.872618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:33.335 [2024-07-26 16:33:52.872660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:33.335 [2024-07-26 16:33:52.872670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:33.900 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:33.900 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:28:33.900 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:33.900 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:33.900 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:33.900 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:33.900 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:33.900 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.900 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:33.900 [2024-07-26 16:33:53.425891] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:33.900 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.900 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:33.900 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:33.900 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:33.900 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:33.900 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:33.900 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:33.900 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:33.900 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:33.900 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:33.900 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:33.900 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:33.900 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:28:33.900 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:33.900 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:33.900 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:33.900 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:33.900 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:33.900 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:33.900 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:33.900 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:33.900 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:33.900 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:33.900 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:33.900 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:33.900 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:33.900 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:33.900 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.900 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:33.900 Malloc1 00:28:33.900 [2024-07-26 16:33:53.552234] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:33.900 Malloc2 00:28:34.157 Malloc3 00:28:34.157 Malloc4 00:28:34.415 Malloc5 00:28:34.415 Malloc6 00:28:34.415 Malloc7 00:28:34.676 Malloc8 00:28:34.676 Malloc9 00:28:34.676 Malloc10 00:28:34.935 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.935 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:34.935 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:34.935 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:34.935 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=742440 00:28:34.935 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 742440 /var/tmp/bdevperf.sock 00:28:34.935 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 742440 ']' 00:28:34.935 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:34.935 16:33:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:34.935 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:34.935 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:34.935 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:34.935 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:28:34.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:34.935 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:34.935 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:28:34.935 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:34.935 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:34.935 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:34.935 { 00:28:34.935 "params": { 00:28:34.935 "name": "Nvme$subsystem", 00:28:34.935 "trtype": "$TEST_TRANSPORT", 00:28:34.935 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.935 "adrfam": "ipv4", 00:28:34.935 "trsvcid": "$NVMF_PORT", 00:28:34.935 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.935 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.935 "hdgst": ${hdgst:-false}, 00:28:34.935 "ddgst": ${ddgst:-false} 00:28:34.935 }, 00:28:34.935 "method": "bdev_nvme_attach_controller" 00:28:34.935 } 00:28:34.935 EOF 00:28:34.935 )") 00:28:34.935 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:34.935 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:34.935 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:34.935 { 00:28:34.935 "params": { 00:28:34.935 "name": "Nvme$subsystem", 00:28:34.935 "trtype": "$TEST_TRANSPORT", 00:28:34.935 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.935 "adrfam": "ipv4", 00:28:34.935 "trsvcid": "$NVMF_PORT", 00:28:34.935 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.935 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.935 "hdgst": ${hdgst:-false}, 00:28:34.935 "ddgst": ${ddgst:-false} 00:28:34.935 }, 00:28:34.935 "method": "bdev_nvme_attach_controller" 00:28:34.935 } 00:28:34.935 EOF 00:28:34.935 )") 00:28:34.935 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:34.935 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:34.935 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:34.935 { 00:28:34.935 "params": { 00:28:34.935 
"name": "Nvme$subsystem", 00:28:34.935 "trtype": "$TEST_TRANSPORT", 00:28:34.935 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.935 "adrfam": "ipv4", 00:28:34.935 "trsvcid": "$NVMF_PORT", 00:28:34.935 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.935 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.935 "hdgst": ${hdgst:-false}, 00:28:34.935 "ddgst": ${ddgst:-false} 00:28:34.935 }, 00:28:34.935 "method": "bdev_nvme_attach_controller" 00:28:34.935 } 00:28:34.935 EOF 00:28:34.935 )") 00:28:34.935 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:34.935 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:34.935 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:34.935 { 00:28:34.935 "params": { 00:28:34.935 "name": "Nvme$subsystem", 00:28:34.935 "trtype": "$TEST_TRANSPORT", 00:28:34.935 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.935 "adrfam": "ipv4", 00:28:34.935 "trsvcid": "$NVMF_PORT", 00:28:34.935 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.935 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.935 "hdgst": ${hdgst:-false}, 00:28:34.935 "ddgst": ${ddgst:-false} 00:28:34.935 }, 00:28:34.935 "method": "bdev_nvme_attach_controller" 00:28:34.935 } 00:28:34.935 EOF 00:28:34.935 )") 00:28:34.935 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:34.935 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:34.935 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:34.935 { 00:28:34.935 "params": { 00:28:34.935 "name": "Nvme$subsystem", 00:28:34.935 "trtype": "$TEST_TRANSPORT", 00:28:34.935 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.935 "adrfam": "ipv4", 00:28:34.935 "trsvcid": "$NVMF_PORT", 00:28:34.935 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.935 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.935 "hdgst": ${hdgst:-false}, 00:28:34.935 "ddgst": ${ddgst:-false} 00:28:34.935 }, 00:28:34.935 "method": "bdev_nvme_attach_controller" 00:28:34.935 } 00:28:34.935 EOF 00:28:34.935 )") 00:28:34.935 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:34.935 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:34.935 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:34.935 { 00:28:34.935 "params": { 00:28:34.935 "name": "Nvme$subsystem", 00:28:34.935 "trtype": "$TEST_TRANSPORT", 00:28:34.935 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.935 "adrfam": "ipv4", 00:28:34.935 "trsvcid": "$NVMF_PORT", 00:28:34.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.936 "hdgst": ${hdgst:-false}, 00:28:34.936 "ddgst": ${ddgst:-false} 00:28:34.936 }, 00:28:34.936 "method": "bdev_nvme_attach_controller" 00:28:34.936 } 00:28:34.936 EOF 00:28:34.936 )") 00:28:34.936 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:34.936 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:28:34.936 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:34.936 { 00:28:34.936 "params": { 00:28:34.936 "name": "Nvme$subsystem", 00:28:34.936 "trtype": "$TEST_TRANSPORT", 00:28:34.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.936 "adrfam": "ipv4", 00:28:34.936 "trsvcid": "$NVMF_PORT", 00:28:34.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.936 "hdgst": ${hdgst:-false}, 00:28:34.936 "ddgst": ${ddgst:-false} 00:28:34.936 }, 00:28:34.936 "method": "bdev_nvme_attach_controller" 00:28:34.936 } 00:28:34.936 EOF 00:28:34.936 )") 00:28:34.936 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:34.936 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:34.936 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:34.936 { 00:28:34.936 "params": { 00:28:34.936 "name": "Nvme$subsystem", 00:28:34.936 "trtype": "$TEST_TRANSPORT", 00:28:34.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.936 "adrfam": "ipv4", 00:28:34.936 "trsvcid": "$NVMF_PORT", 00:28:34.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.936 "hdgst": ${hdgst:-false}, 00:28:34.936 "ddgst": ${ddgst:-false} 00:28:34.936 }, 00:28:34.936 "method": "bdev_nvme_attach_controller" 00:28:34.936 } 00:28:34.936 EOF 00:28:34.936 )") 00:28:34.936 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:34.936 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:34.936 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:34.936 { 00:28:34.936 "params": { 00:28:34.936 "name": "Nvme$subsystem", 00:28:34.936 "trtype": "$TEST_TRANSPORT", 00:28:34.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.936 "adrfam": "ipv4", 00:28:34.936 "trsvcid": "$NVMF_PORT", 00:28:34.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.936 "hdgst": ${hdgst:-false}, 00:28:34.936 "ddgst": ${ddgst:-false} 00:28:34.936 }, 00:28:34.936 "method": "bdev_nvme_attach_controller" 00:28:34.936 } 00:28:34.936 EOF 00:28:34.936 )") 00:28:34.936 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:34.936 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:34.936 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:34.936 { 00:28:34.936 "params": { 00:28:34.936 "name": "Nvme$subsystem", 00:28:34.936 "trtype": "$TEST_TRANSPORT", 00:28:34.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.936 "adrfam": "ipv4", 00:28:34.936 "trsvcid": "$NVMF_PORT", 00:28:34.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.936 "hdgst": ${hdgst:-false}, 00:28:34.936 "ddgst": ${ddgst:-false} 00:28:34.936 }, 00:28:34.936 "method": "bdev_nvme_attach_controller" 00:28:34.936 } 00:28:34.936 EOF 00:28:34.936 )") 00:28:34.936 16:33:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:34.936 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:28:34.936 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:28:34.936 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:34.936 "params": { 00:28:34.936 "name": "Nvme1", 00:28:34.936 "trtype": "tcp", 00:28:34.936 "traddr": "10.0.0.2", 00:28:34.936 "adrfam": "ipv4", 00:28:34.936 "trsvcid": "4420", 00:28:34.936 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:34.936 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:34.936 "hdgst": false, 00:28:34.936 "ddgst": false 00:28:34.936 }, 00:28:34.936 "method": "bdev_nvme_attach_controller" 00:28:34.936 },{ 00:28:34.936 "params": { 00:28:34.936 "name": "Nvme2", 00:28:34.936 "trtype": "tcp", 00:28:34.936 "traddr": "10.0.0.2", 00:28:34.936 "adrfam": "ipv4", 00:28:34.936 "trsvcid": "4420", 00:28:34.936 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:34.936 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:34.936 "hdgst": false, 00:28:34.936 "ddgst": false 00:28:34.936 }, 00:28:34.936 "method": "bdev_nvme_attach_controller" 00:28:34.936 },{ 00:28:34.936 "params": { 00:28:34.936 "name": "Nvme3", 00:28:34.936 "trtype": "tcp", 00:28:34.936 "traddr": "10.0.0.2", 00:28:34.936 "adrfam": "ipv4", 00:28:34.936 "trsvcid": "4420", 00:28:34.936 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:34.936 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:34.936 "hdgst": false, 00:28:34.936 "ddgst": false 00:28:34.936 }, 00:28:34.936 "method": "bdev_nvme_attach_controller" 00:28:34.936 },{ 00:28:34.936 "params": { 00:28:34.936 "name": "Nvme4", 00:28:34.936 "trtype": "tcp", 00:28:34.936 "traddr": "10.0.0.2", 00:28:34.936 "adrfam": "ipv4", 00:28:34.936 "trsvcid": "4420", 00:28:34.936 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:34.936 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:34.936 "hdgst": false, 00:28:34.936 "ddgst": false 00:28:34.936 }, 00:28:34.936 "method": "bdev_nvme_attach_controller" 00:28:34.936 },{ 00:28:34.936 "params": { 00:28:34.936 "name": "Nvme5", 00:28:34.936 "trtype": "tcp", 00:28:34.936 "traddr": "10.0.0.2", 00:28:34.936 "adrfam": "ipv4", 00:28:34.936 "trsvcid": "4420", 00:28:34.936 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:34.936 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:34.936 "hdgst": false, 00:28:34.936 "ddgst": false 00:28:34.936 }, 00:28:34.936 "method": "bdev_nvme_attach_controller" 00:28:34.936 },{ 00:28:34.936 "params": { 00:28:34.936 "name": "Nvme6", 00:28:34.936 "trtype": "tcp", 00:28:34.936 "traddr": "10.0.0.2", 00:28:34.936 "adrfam": "ipv4", 00:28:34.936 "trsvcid": "4420", 00:28:34.936 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:34.936 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:34.936 "hdgst": false, 00:28:34.936 "ddgst": false 00:28:34.936 }, 00:28:34.936 "method": "bdev_nvme_attach_controller" 00:28:34.936 },{ 00:28:34.936 "params": { 00:28:34.936 "name": "Nvme7", 00:28:34.936 "trtype": "tcp", 00:28:34.936 "traddr": "10.0.0.2", 00:28:34.936 "adrfam": "ipv4", 00:28:34.936 "trsvcid": "4420", 00:28:34.936 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:34.936 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:34.936 "hdgst": false, 00:28:34.936 "ddgst": false 00:28:34.936 }, 00:28:34.936 "method": "bdev_nvme_attach_controller" 00:28:34.936 },{ 00:28:34.936 "params": { 00:28:34.936 "name": "Nvme8", 00:28:34.936 "trtype": "tcp", 
00:28:34.936 "traddr": "10.0.0.2", 00:28:34.936 "adrfam": "ipv4", 00:28:34.936 "trsvcid": "4420", 00:28:34.936 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:34.936 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:34.936 "hdgst": false, 00:28:34.936 "ddgst": false 00:28:34.936 }, 00:28:34.936 "method": "bdev_nvme_attach_controller" 00:28:34.936 },{ 00:28:34.936 "params": { 00:28:34.936 "name": "Nvme9", 00:28:34.936 "trtype": "tcp", 00:28:34.936 "traddr": "10.0.0.2", 00:28:34.936 "adrfam": "ipv4", 00:28:34.936 "trsvcid": "4420", 00:28:34.936 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:34.936 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:34.936 "hdgst": false, 00:28:34.936 "ddgst": false 00:28:34.936 }, 00:28:34.936 "method": "bdev_nvme_attach_controller" 00:28:34.936 },{ 00:28:34.936 "params": { 00:28:34.936 "name": "Nvme10", 00:28:34.936 "trtype": "tcp", 00:28:34.936 "traddr": "10.0.0.2", 00:28:34.936 "adrfam": "ipv4", 00:28:34.936 "trsvcid": "4420", 00:28:34.936 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:34.936 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:34.936 "hdgst": false, 00:28:34.936 "ddgst": false 00:28:34.936 }, 00:28:34.936 "method": "bdev_nvme_attach_controller" 00:28:34.936 }' 00:28:34.936 [2024-07-26 16:33:54.558931] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:34.937 [2024-07-26 16:33:54.559132] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid742440 ] 00:28:34.937 EAL: No free 2048 kB hugepages reported on node 1 00:28:34.937 [2024-07-26 16:33:54.683742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.196 [2024-07-26 16:33:54.929647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.724 Running I/O for 10 seconds... 
00:28:37.724 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:37.724 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:28:37.724 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:37.724 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.724 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:37.724 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.724 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:37.724 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:37.724 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:37.724 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:28:37.724 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:28:37.724 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:37.724 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:37.724 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:37.724 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:37.724 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.724 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:37.724 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.724 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:28:37.724 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:28:37.724 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:37.982 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:37.982 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:37.982 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:37.982 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:37.982 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.982 16:33:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:37.982 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.982 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=72 00:28:37.982 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 72 -ge 100 ']' 00:28:37.982 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:38.239 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:38.239 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:38.239 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:38.239 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:38.239 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.239 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:38.239 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.239 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=136 00:28:38.239 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 136 -ge 100 ']' 00:28:38.239 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:28:38.239 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:28:38.239 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:28:38.239 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 742440 00:28:38.239 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 742440 ']' 00:28:38.239 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 742440 00:28:38.239 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:28:38.239 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:38.239 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 742440 00:28:38.239 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:38.239 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:38.239 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 742440' 00:28:38.239 killing process with pid 742440 00:28:38.239 16:33:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 742440 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 742440
00:28:38.499 Received shutdown signal, test time was about 1.014820 seconds
00:28:38.499
00:28:38.499 Latency(us)
00:28:38.499 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:38.499 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:38.499 Verification LBA range: start 0x0 length 0x400
00:28:38.499 Nvme1n1 : 0.99 198.70 12.42 0.00 0.00 317645.62 4708.88 302921.96
00:28:38.499 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:38.499 Verification LBA range: start 0x0 length 0x400
00:28:38.499 Nvme2n1 : 0.97 211.13 13.20 0.00 0.00 287042.33 11116.85 296708.17
00:28:38.499 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:38.499 Verification LBA range: start 0x0 length 0x400
00:28:38.499 Nvme3n1 : 0.94 205.03 12.81 0.00 0.00 294761.05 22039.51 292047.83
00:28:38.499 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:38.499 Verification LBA range: start 0x0 length 0x400
00:28:38.499 Nvme4n1 : 0.96 204.98 12.81 0.00 0.00 287029.48 3616.62 298261.62
00:28:38.499 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:38.499 Verification LBA range: start 0x0 length 0x400
00:28:38.499 Nvme5n1 : 1.00 191.62 11.98 0.00 0.00 302671.14 29515.47 310689.19
00:28:38.499 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:38.499 Verification LBA range: start 0x0 length 0x400
00:28:38.499 Nvme6n1 : 0.98 196.13 12.26 0.00 0.00 288958.14 45826.65 276513.37
00:28:38.499 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:38.499 Verification LBA range: start 0x0 length 0x400
00:28:38.499 Nvme7n1 : 0.96 199.12 12.44 0.00 0.00 277773.53 21748.24 299815.06
00:28:38.499 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:38.499 Verification LBA range: start 0x0 length 0x400
00:28:38.499 Nvme8n1 : 0.98 195.19 12.20 0.00 0.00 277419.55 25049.32 304475.40
00:28:38.499 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:38.499 Verification LBA range: start 0x0 length 0x400
00:28:38.499 Nvme9n1 : 1.01 189.35 11.83 0.00 0.00 281224.79 31845.64 363506.35
00:28:38.499 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:38.499 Verification LBA range: start 0x0 length 0x400
00:28:38.499 Nvme10n1 : 1.00 191.42 11.96 0.00 0.00 270474.05 23107.51 309135.74
00:28:38.499 ===================================================================================================================
00:28:38.499 Total : 1982.68 123.92 0.00 0.00 288563.00 3616.62 363506.35
00:28:39.438 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:28:40.372 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 742127
00:28:40.372 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:28:40.372 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:28:40.372 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:40.372 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:40.372 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:40.372 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:40.372 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:28:40.372 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:40.372 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:28:40.372 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:40.372 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:40.372 rmmod nvme_tcp 00:28:40.632 rmmod nvme_fabrics 00:28:40.632 rmmod nvme_keyring 00:28:40.632 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:40.632 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:28:40.632 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:28:40.632 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 742127 ']' 00:28:40.632 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 742127 00:28:40.632 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 742127 ']' 00:28:40.632 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 742127 00:28:40.632 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:28:40.632 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:40.632 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 742127 00:28:40.632 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:40.632 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:40.632 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 742127' 00:28:40.632 killing process with pid 742127 00:28:40.632 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 742127 00:28:40.632 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 742127 00:28:43.955 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:43.955 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:43.955 
16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:43.955 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:43.955 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:43.955 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:43.955 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:43.955 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:45.861 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:45.861 00:28:45.861 real 0m12.998s 00:28:45.861 user 0m43.441s 00:28:45.861 sys 0m2.041s 00:28:45.861 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:45.861 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:45.861 ************************************ 00:28:45.861 END TEST nvmf_shutdown_tc2 00:28:45.861 ************************************ 00:28:45.861 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:45.861 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:45.861 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:45.861 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:45.861 ************************************ 00:28:45.861 START TEST nvmf_shutdown_tc3 00:28:45.861 ************************************ 00:28:45.861 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:28:45.861 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:28:45.861 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:45.861 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:45.861 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:45.861 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:45.861 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:45.861 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:45.861 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:45.861 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:45.861 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:45.861 
16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:45.861 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:45.861 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:45.861 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:45.861 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:45.861 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:45.861 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:45.861 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:45.861 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:45.861 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:45.861 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:45.861 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:28:45.861 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:45.861 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:28:45.861 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:45.862 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:45.862 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:45.862 16:34:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:45.862 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:45.862 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:45.862 16:34:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:45.862 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:45.862 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:28:45.862 00:28:45.862 --- 10.0.0.2 ping statistics --- 00:28:45.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.862 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:28:45.862 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:45.862 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:45.862 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:28:45.862 00:28:45.862 --- 10.0.0.1 ping statistics --- 00:28:45.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.862 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:28:45.863 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:45.863 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:28:45.863 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:45.863 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:45.863 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:45.863 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:45.863 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:45.863 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:45.863 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:45.863 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:45.863 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:45.863 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:45.863 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:45.863 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=743761 00:28:45.863 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:45.863 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 743761 00:28:45.863 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 743761 ']' 00:28:45.863 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:45.863 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:45.863 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:45.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:45.863 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:45.863 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:45.863 [2024-07-26 16:34:05.517250] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:45.863 [2024-07-26 16:34:05.517416] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:45.863 EAL: No free 2048 kB hugepages reported on node 1 00:28:46.120 [2024-07-26 16:34:05.658417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:46.378 [2024-07-26 16:34:05.917239] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:46.378 [2024-07-26 16:34:05.917310] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:46.378 [2024-07-26 16:34:05.917340] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:46.378 [2024-07-26 16:34:05.917361] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:46.378 [2024-07-26 16:34:05.917382] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:46.378 [2024-07-26 16:34:05.917517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:46.378 [2024-07-26 16:34:05.917619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:46.378 [2024-07-26 16:34:05.917674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:46.378 [2024-07-26 16:34:05.917684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:46.943 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:46.943 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:28:46.943 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:46.943 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:46.943 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:46.943 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:46.943 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:46.943 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.943 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:46.943 [2024-07-26 16:34:06.489153] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:46.943 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.943 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # 
num_subsystems=({1..10}) 00:28:46.943 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:46.943 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:46.943 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:46.943 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:46.943 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:46.943 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:46.943 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:46.943 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:46.943 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:46.943 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:46.943 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:46.943 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:46.943 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:46.943 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:46.943 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:46.943 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:46.943 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:46.943 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:46.943 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:46.943 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:46.943 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:46.943 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:46.943 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:46.943 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:46.943 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:46.943 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.943 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:28:46.943 Malloc1 00:28:46.943 [2024-07-26 16:34:06.635256] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:47.203 Malloc2 00:28:47.203 Malloc3 00:28:47.203 Malloc4 00:28:47.461 Malloc5 00:28:47.461 Malloc6 00:28:47.461 Malloc7 00:28:47.718 Malloc8 00:28:47.718 Malloc9 00:28:47.976 Malloc10 00:28:47.976 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.976 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:47.976 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:47.976 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:47.976 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=744073 00:28:47.976 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 744073 /var/tmp/bdevperf.sock 00:28:47.976 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 744073 ']' 00:28:47.976 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:47.976 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:47.976 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:47.976 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:47.976 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:28:47.976 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:47.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:47.976 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:28:47.976 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:47.976 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:47.976 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:47.976 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:47.976 { 00:28:47.976 "params": { 00:28:47.976 "name": "Nvme$subsystem", 00:28:47.976 "trtype": "$TEST_TRANSPORT", 00:28:47.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.976 "adrfam": "ipv4", 00:28:47.976 "trsvcid": "$NVMF_PORT", 00:28:47.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.976 "hdgst": ${hdgst:-false}, 00:28:47.976 "ddgst": ${ddgst:-false} 00:28:47.976 }, 00:28:47.976 "method": "bdev_nvme_attach_controller" 00:28:47.976 } 00:28:47.976 EOF 00:28:47.976 )") 00:28:47.976 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:47.976 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:47.976 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:47.976 { 00:28:47.976 "params": { 00:28:47.976 "name": "Nvme$subsystem", 00:28:47.976 "trtype": "$TEST_TRANSPORT", 00:28:47.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.976 "adrfam": "ipv4", 00:28:47.976 "trsvcid": "$NVMF_PORT", 00:28:47.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.976 "hdgst": ${hdgst:-false}, 00:28:47.976 "ddgst": ${ddgst:-false} 00:28:47.976 }, 00:28:47.976 "method": "bdev_nvme_attach_controller" 00:28:47.976 } 00:28:47.976 EOF 00:28:47.976 )") 00:28:47.976 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:47.976 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:47.976 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:47.976 { 00:28:47.976 "params": { 00:28:47.976 "name": "Nvme$subsystem", 00:28:47.976 "trtype": "$TEST_TRANSPORT", 00:28:47.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.976 "adrfam": "ipv4", 00:28:47.976 "trsvcid": "$NVMF_PORT", 00:28:47.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.976 "hdgst": ${hdgst:-false}, 00:28:47.976 "ddgst": ${ddgst:-false} 00:28:47.976 }, 00:28:47.976 "method": "bdev_nvme_attach_controller" 00:28:47.976 } 00:28:47.976 EOF 00:28:47.976 )") 00:28:47.976 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:47.976 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:47.976 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:47.976 { 00:28:47.976 "params": { 00:28:47.976 "name": "Nvme$subsystem", 00:28:47.976 
"trtype": "$TEST_TRANSPORT", 00:28:47.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.977 "adrfam": "ipv4", 00:28:47.977 "trsvcid": "$NVMF_PORT", 00:28:47.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.977 "hdgst": ${hdgst:-false}, 00:28:47.977 "ddgst": ${ddgst:-false} 00:28:47.977 }, 00:28:47.977 "method": "bdev_nvme_attach_controller" 00:28:47.977 } 00:28:47.977 EOF 00:28:47.977 )") 00:28:47.977 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:47.977 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:47.977 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:47.977 { 00:28:47.977 "params": { 00:28:47.977 "name": "Nvme$subsystem", 00:28:47.977 "trtype": "$TEST_TRANSPORT", 00:28:47.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.977 "adrfam": "ipv4", 00:28:47.977 "trsvcid": "$NVMF_PORT", 00:28:47.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.977 "hdgst": ${hdgst:-false}, 00:28:47.977 "ddgst": ${ddgst:-false} 00:28:47.977 }, 00:28:47.977 "method": "bdev_nvme_attach_controller" 00:28:47.977 } 00:28:47.977 EOF 00:28:47.977 )") 00:28:47.977 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:47.977 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:47.977 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:47.977 { 00:28:47.977 "params": { 00:28:47.977 "name": "Nvme$subsystem", 00:28:47.977 "trtype": "$TEST_TRANSPORT", 00:28:47.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.977 "adrfam": "ipv4", 00:28:47.977 "trsvcid": "$NVMF_PORT", 00:28:47.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.977 "hdgst": ${hdgst:-false}, 00:28:47.977 "ddgst": ${ddgst:-false} 00:28:47.977 }, 00:28:47.977 "method": "bdev_nvme_attach_controller" 00:28:47.977 } 00:28:47.977 EOF 00:28:47.977 )") 00:28:47.977 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:47.977 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:47.977 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:47.977 { 00:28:47.977 "params": { 00:28:47.977 "name": "Nvme$subsystem", 00:28:47.977 "trtype": "$TEST_TRANSPORT", 00:28:47.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.977 "adrfam": "ipv4", 00:28:47.977 "trsvcid": "$NVMF_PORT", 00:28:47.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.977 "hdgst": ${hdgst:-false}, 00:28:47.977 "ddgst": ${ddgst:-false} 00:28:47.977 }, 00:28:47.977 "method": "bdev_nvme_attach_controller" 00:28:47.977 } 00:28:47.977 EOF 00:28:47.977 )") 00:28:47.977 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:47.977 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:47.977 16:34:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:47.977 { 00:28:47.977 "params": { 00:28:47.977 "name": "Nvme$subsystem", 00:28:47.977 "trtype": "$TEST_TRANSPORT", 00:28:47.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.977 "adrfam": "ipv4", 00:28:47.977 "trsvcid": "$NVMF_PORT", 00:28:47.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.977 "hdgst": ${hdgst:-false}, 00:28:47.977 "ddgst": ${ddgst:-false} 00:28:47.977 }, 00:28:47.977 "method": "bdev_nvme_attach_controller" 00:28:47.977 } 00:28:47.977 EOF 00:28:47.977 )") 00:28:47.977 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:47.977 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:47.977 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:47.977 { 00:28:47.977 "params": { 00:28:47.977 "name": "Nvme$subsystem", 00:28:47.977 "trtype": "$TEST_TRANSPORT", 00:28:47.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.977 "adrfam": "ipv4", 00:28:47.977 "trsvcid": "$NVMF_PORT", 00:28:47.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.977 "hdgst": ${hdgst:-false}, 00:28:47.977 "ddgst": ${ddgst:-false} 00:28:47.977 }, 00:28:47.977 "method": "bdev_nvme_attach_controller" 00:28:47.977 } 00:28:47.977 EOF 00:28:47.977 )") 00:28:47.977 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:47.977 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:47.977 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:47.977 { 00:28:47.977 "params": { 00:28:47.977 "name": "Nvme$subsystem", 00:28:47.977 "trtype": "$TEST_TRANSPORT", 00:28:47.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.977 "adrfam": "ipv4", 00:28:47.977 "trsvcid": "$NVMF_PORT", 00:28:47.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.977 "hdgst": ${hdgst:-false}, 00:28:47.977 "ddgst": ${ddgst:-false} 00:28:47.977 }, 00:28:47.977 "method": "bdev_nvme_attach_controller" 00:28:47.977 } 00:28:47.977 EOF 00:28:47.977 )") 00:28:47.977 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:47.977 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:28:47.977 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:28:47.977 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:47.977 "params": { 00:28:47.977 "name": "Nvme1", 00:28:47.977 "trtype": "tcp", 00:28:47.977 "traddr": "10.0.0.2", 00:28:47.977 "adrfam": "ipv4", 00:28:47.977 "trsvcid": "4420", 00:28:47.977 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:47.977 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:47.977 "hdgst": false, 00:28:47.977 "ddgst": false 00:28:47.977 }, 00:28:47.977 "method": "bdev_nvme_attach_controller" 00:28:47.977 },{ 00:28:47.977 "params": { 00:28:47.977 "name": "Nvme2", 00:28:47.977 "trtype": "tcp", 00:28:47.977 "traddr": "10.0.0.2", 00:28:47.977 "adrfam": "ipv4", 00:28:47.977 "trsvcid": "4420", 00:28:47.977 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:47.977 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:47.977 "hdgst": false, 00:28:47.977 "ddgst": false 00:28:47.977 }, 00:28:47.977 "method": "bdev_nvme_attach_controller" 00:28:47.977 },{ 00:28:47.977 "params": { 00:28:47.977 "name": "Nvme3", 00:28:47.977 "trtype": "tcp", 00:28:47.977 "traddr": "10.0.0.2", 00:28:47.977 "adrfam": "ipv4", 00:28:47.977 "trsvcid": "4420", 00:28:47.977 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:47.977 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:47.977 "hdgst": false, 00:28:47.977 "ddgst": false 00:28:47.977 }, 00:28:47.977 "method": "bdev_nvme_attach_controller" 00:28:47.977 },{ 00:28:47.977 "params": { 00:28:47.977 "name": "Nvme4", 00:28:47.977 "trtype": "tcp", 00:28:47.977 "traddr": "10.0.0.2", 00:28:47.977 "adrfam": "ipv4", 00:28:47.977 "trsvcid": "4420", 00:28:47.977 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:47.977 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:47.977 "hdgst": false, 00:28:47.977 "ddgst": false 00:28:47.977 }, 00:28:47.977 "method": "bdev_nvme_attach_controller" 00:28:47.977 },{ 00:28:47.977 "params": { 00:28:47.977 "name": "Nvme5", 00:28:47.977 "trtype": "tcp", 00:28:47.977 "traddr": "10.0.0.2", 00:28:47.977 "adrfam": "ipv4", 00:28:47.977 "trsvcid": "4420", 00:28:47.977 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:47.977 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:47.977 "hdgst": false, 00:28:47.977 "ddgst": false 00:28:47.977 }, 00:28:47.977 "method": "bdev_nvme_attach_controller" 00:28:47.977 },{ 00:28:47.977 "params": { 00:28:47.977 "name": "Nvme6", 00:28:47.977 "trtype": "tcp", 00:28:47.977 "traddr": "10.0.0.2", 00:28:47.977 "adrfam": "ipv4", 00:28:47.977 "trsvcid": "4420", 00:28:47.977 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:47.977 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:47.977 "hdgst": false, 00:28:47.977 "ddgst": false 00:28:47.977 }, 00:28:47.977 "method": "bdev_nvme_attach_controller" 00:28:47.977 },{ 00:28:47.977 "params": { 00:28:47.977 "name": "Nvme7", 00:28:47.977 "trtype": "tcp", 00:28:47.977 "traddr": "10.0.0.2", 00:28:47.977 "adrfam": "ipv4", 00:28:47.977 "trsvcid": "4420", 00:28:47.977 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:47.977 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:47.977 "hdgst": false, 00:28:47.977 "ddgst": false 00:28:47.977 }, 00:28:47.977 "method": "bdev_nvme_attach_controller" 00:28:47.977 },{ 00:28:47.977 "params": { 00:28:47.977 "name": "Nvme8", 00:28:47.977 "trtype": "tcp", 00:28:47.977 "traddr": "10.0.0.2", 00:28:47.977 "adrfam": "ipv4", 00:28:47.977 "trsvcid": "4420", 00:28:47.977 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:47.977 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:47.977 "hdgst": false, 00:28:47.977 "ddgst": false 00:28:47.977 }, 00:28:47.977 "method": "bdev_nvme_attach_controller" 00:28:47.977 },{ 00:28:47.977 "params": { 00:28:47.977 "name": "Nvme9", 00:28:47.977 "trtype": "tcp", 00:28:47.977 "traddr": "10.0.0.2", 00:28:47.977 "adrfam": "ipv4", 00:28:47.977 "trsvcid": "4420", 00:28:47.977 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:47.977 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:47.977 "hdgst": false, 00:28:47.977 "ddgst": false 00:28:47.977 }, 00:28:47.977 "method": "bdev_nvme_attach_controller" 00:28:47.977 },{ 00:28:47.977 "params": { 00:28:47.977 "name": "Nvme10", 00:28:47.977 "trtype": "tcp", 00:28:47.977 "traddr": "10.0.0.2", 00:28:47.977 "adrfam": "ipv4", 00:28:47.977 "trsvcid": "4420", 00:28:47.977 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:47.977 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:47.977 "hdgst": false, 00:28:47.977 "ddgst": false 00:28:47.977 }, 00:28:47.977 "method": "bdev_nvme_attach_controller" 00:28:47.977 }' 00:28:47.977 [2024-07-26 16:34:07.637429] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:47.977 [2024-07-26 16:34:07.637583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid744073 ] 00:28:47.977 EAL: No free 2048 kB hugepages reported on node 1 00:28:48.234 [2024-07-26 16:34:07.766987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.494 [2024-07-26 16:34:08.011047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:50.394 Running I/O for 10 seconds... 00:28:50.653 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:50.653 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:28:50.653 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:50.653 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.653 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:50.653 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.653 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:50.653 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:50.653 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:50.653 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:50.653 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:28:50.653 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:28:50.653 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@59 -- # (( i = 10 )) 00:28:50.653 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:50.653 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:50.653 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:50.653 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.653 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:50.653 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.653 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:28:50.653 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:28:50.653 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:50.912 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:50.912 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:50.912 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:50.912 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:50.912 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.912 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:50.912 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.912 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:28:50.912 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:28:50.912 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:51.170 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:51.170 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:51.170 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:51.170 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:51.170 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.170 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:51.445 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.445 16:34:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:28:51.445 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:28:51.445 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:28:51.445 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:28:51.445 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:28:51.445 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 743761 00:28:51.445 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 743761 ']' 00:28:51.445 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 743761 00:28:51.445 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:28:51.445 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:51.445 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 743761 00:28:51.445 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:51.445 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:51.445 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 743761' 00:28:51.445 killing process with pid 743761 00:28:51.445 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 743761 00:28:51.445 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 743761 00:28:51.445 [2024-07-26 16:34:10.998130] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:28:51.445 [2024-07-26 16:34:10.998236] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:28:51.445 [2024-07-26 16:34:10.998268] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:28:51.445 [2024-07-26 16:34:10.998287] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:28:51.445 [2024-07-26 16:34:10.998305] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:28:51.445 [2024-07-26 16:34:10.998331] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:28:51.445 [2024-07-26 16:34:10.998357] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:28:51.445 [2024-07-26 16:34:10.998379] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:28:51.445 [2024-07-26 
16:34:10.998398] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:28:51.445 [2024-07-26 16:34:10.998415] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:28:51.445 [2024-07-26 16:34:10.998432] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:28:51.445 [2024-07-26 16:34:10.998450] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:28:51.445 [2024-07-26 16:34:10.998468] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:28:51.445 [2024-07-26 16:34:10.998489] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:28:51.445 [2024-07-26 16:34:10.998507] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:28:51.445 [2024-07-26 16:34:10.998524] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:28:51.445 [2024-07-26 16:34:10.998543] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:28:51.445 [2024-07-26 16:34:10.998568] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:28:51.445 [2024-07-26 16:34:10.998589] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:28:51.445 [2024-07-26 16:34:10.998607] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:28:51.445 [2024-07-26 16:34:10.998625] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:28:51.445 [2024-07-26 16:34:10.998642] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:28:51.445 [2024-07-26 16:34:10.998660] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:28:51.445 [2024-07-26 16:34:10.998678] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:28:51.445 [2024-07-26 16:34:10.998713] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:28:51.445 [2024-07-26 16:34:10.998732] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:28:51.445 [2024-07-26 16:34:10.998750] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:28:51.445 [2024-07-26 16:34:10.998767] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:28:51.445 [2024-07-26 16:34:10.998790] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:28:51.445 [2024-07-26 
16:34:10.998808 - 16:34:10.999443] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set (message repeated)
00:28:51.446 [2024-07-26 16:34:11.002127 - 16:34:11.002211] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set (message repeated)
00:28:51.446 [2024-07-26 16:34:11.004926 - 16:34:11.006138] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set (message repeated)
00:28:51.447 [2024-07-26 16:34:11.009654 - 16:34:11.009755] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set (message repeated)
00:28:51.447 [2024-07-26 16:34:11.011968 - 16:34:11.013227] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set (message repeated)
00:28:51.447 [2024-07-26 16:34:11.014948 - 16:34:11.016151] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set (message repeated)
00:28:51.448 [2024-07-26
16:34:11.016167] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:28:51.448 [2024-07-26 16:34:11.016229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.448 [2024-07-26 16:34:11.016287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.448 [2024-07-26 16:34:11.016348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.448 [2024-07-26 16:34:11.016374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.448 [2024-07-26 16:34:11.016401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.448 [2024-07-26 16:34:11.016424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.448 [2024-07-26 16:34:11.016448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.448 [2024-07-26 16:34:11.016471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.448 [2024-07-26 16:34:11.016495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.448 [2024-07-26 16:34:11.016516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.448 [2024-07-26 16:34:11.016540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.448 [2024-07-26 16:34:11.016562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.448 [2024-07-26 16:34:11.016586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.448 [2024-07-26 16:34:11.016608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.448 [2024-07-26 16:34:11.016632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.448 [2024-07-26 16:34:11.016654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.448 [2024-07-26 16:34:11.016678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.448 [2024-07-26 16:34:11.016699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.448 [2024-07-26 16:34:11.016723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.448 [2024-07-26 16:34:11.016745] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.448 [2024-07-26 16:34:11.016769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.449 [2024-07-26 16:34:11.016790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.449 [2024-07-26 16:34:11.016814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.449 [2024-07-26 16:34:11.016835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.449 [2024-07-26 16:34:11.016860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.449 [2024-07-26 16:34:11.016888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.449 [2024-07-26 16:34:11.016913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.449 [2024-07-26 16:34:11.016951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.449 [2024-07-26 16:34:11.016976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.449 [2024-07-26 16:34:11.016998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.449 [2024-07-26 16:34:11.017020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.449 [2024-07-26 16:34:11.017041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.449 [2024-07-26 16:34:11.017087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.449 [2024-07-26 16:34:11.017113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.449 [2024-07-26 16:34:11.017158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.449 [2024-07-26 16:34:11.017181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.449 [2024-07-26 16:34:11.017204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.449 [2024-07-26 16:34:11.017226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.449 [2024-07-26 16:34:11.017250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.449 [2024-07-26 16:34:11.017272] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.449 [2024-07-26 16:34:11.017297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.449 [2024-07-26 16:34:11.017326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.449 [2024-07-26 16:34:11.017351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.449 [2024-07-26 16:34:11.017388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.449 [2024-07-26 16:34:11.017413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.449 [2024-07-26 16:34:11.017435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.449 [2024-07-26 16:34:11.017458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.449 [2024-07-26 16:34:11.017479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.449 [2024-07-26 16:34:11.017501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.449 [2024-07-26 16:34:11.017523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.449 [2024-07-26 16:34:11.017551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.449 [2024-07-26 16:34:11.017573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.449 [2024-07-26 16:34:11.017597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.449 [2024-07-26 16:34:11.017619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.449 [2024-07-26 16:34:11.017642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.449 [2024-07-26 16:34:11.017663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.449 [2024-07-26 16:34:11.017687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.449 [2024-07-26 16:34:11.017708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.449 [2024-07-26 16:34:11.017731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.449 [2024-07-26 16:34:11.017753] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.449 [2024-07-26 16:34:11.017777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.449 [2024-07-26 16:34:11.017798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.449 [2024-07-26 16:34:11.017820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.449 [2024-07-26 16:34:11.017841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.449 [2024-07-26 16:34:11.017865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.449 [2024-07-26 16:34:11.017887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.449 [2024-07-26 16:34:11.017911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.449 [2024-07-26 16:34:11.017933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.449 [2024-07-26 16:34:11.017955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.449 [2024-07-26 16:34:11.017976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.449 [2024-07-26 16:34:11.018000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.449 [2024-07-26 16:34:11.018021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.449 [2024-07-26 16:34:11.018044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.449 [2024-07-26 16:34:11.018087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.449 [2024-07-26 16:34:11.018123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.449 [2024-07-26 16:34:11.018149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.449 [2024-07-26 16:34:11.018174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.449 [2024-07-26 16:34:11.018196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.449 [2024-07-26 16:34:11.018220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.449 [2024-07-26 16:34:11.018242] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.449 [2024-07-26 16:34:11.018266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.449 [2024-07-26 16:34:11.018288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.449 [2024-07-26 16:34:11.018312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.449 [2024-07-26 16:34:11.018339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.450 [2024-07-26 16:34:11.018363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.450 [2024-07-26 16:34:11.018400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.450 [2024-07-26 16:34:11.018424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.450 [2024-07-26 16:34:11.018444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.450 [2024-07-26 16:34:11.018467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.450 [2024-07-26 16:34:11.018488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.450 [2024-07-26 16:34:11.018511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.450 [2024-07-26 16:34:11.018532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.450 [2024-07-26 16:34:11.018555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.450 [2024-07-26 16:34:11.018576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.450 [2024-07-26 16:34:11.018599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.450 [2024-07-26 16:34:11.018620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.450 [2024-07-26 16:34:11.018643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.450 [2024-07-26 16:34:11.018665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.450 [2024-07-26 16:34:11.018666] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.450 [2024-07-26 16:34:11.018689] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.450 [2024-07-26 16:34:11.018701] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.450 [2024-07-26 16:34:11.018714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.450 [2024-07-26 16:34:11.018722] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.450 [2024-07-26 16:34:11.018739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:1[2024-07-26 16:34:11.018741] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.450 with the state(5) to be set 00:28:51.450 [2024-07-26 16:34:11.018762] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same [2024-07-26 16:34:11.018763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cwith the state(5) to be set 00:28:51.450 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.450 [2024-07-26 16:34:11.018783] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.450 [2024-07-26 16:34:11.018789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.450 [2024-07-26 16:34:11.018802] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.450 [2024-07-26 16:34:11.018811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.450 [2024-07-26 16:34:11.018820] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.450 [2024-07-26 16:34:11.018835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:1[2024-07-26 16:34:11.018838] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.450 with the state(5) to be set 00:28:51.450 [2024-07-26 16:34:11.018857] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same [2024-07-26 16:34:11.018858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cwith the state(5) to be set 00:28:51.450 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.450 [2024-07-26 16:34:11.018876] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.450 [2024-07-26 16:34:11.018882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.450 [2024-07-26 16:34:11.018895] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.450 [2024-07-26 16:34:11.018905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:51.450 [2024-07-26 16:34:11.018912] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.450 [2024-07-26 16:34:11.018930] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same [2024-07-26 16:34:11.018929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:1with the state(5) to be set 00:28:51.450 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.450 [2024-07-26 16:34:11.018950] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.450 [2024-07-26 16:34:11.018952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.450 [2024-07-26 16:34:11.018968] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.450 [2024-07-26 16:34:11.018980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.450 [2024-07-26 16:34:11.018987] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.450 [2024-07-26 16:34:11.019003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-26 16:34:11.019004] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.450 with the state(5) to be set 00:28:51.450 [2024-07-26 16:34:11.019024] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.450 [2024-07-26 16:34:11.019029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.450 [2024-07-26 16:34:11.019042] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.450 [2024-07-26 16:34:11.019051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.450 [2024-07-26 16:34:11.019086] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.450 [2024-07-26 16:34:11.019109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.450 [2024-07-26 16:34:11.019115] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.450 [2024-07-26 16:34:11.019133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-26 16:34:11.019135] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.450 with the state(5) to be set 00:28:51.450 [2024-07-26 16:34:11.019156] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.450 [2024-07-26 16:34:11.019160] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.450 [2024-07-26 16:34:11.019173] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.450 [2024-07-26 16:34:11.019183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.450 [2024-07-26 16:34:11.019191] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.450 [2024-07-26 16:34:11.019209] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same [2024-07-26 16:34:11.019208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128with the state(5) to be set 00:28:51.450 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.450 [2024-07-26 16:34:11.019230] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.450 [2024-07-26 16:34:11.019232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.450 [2024-07-26 16:34:11.019248] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.450 [2024-07-26 16:34:11.019256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.450 [2024-07-26 16:34:11.019266] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.450 [2024-07-26 16:34:11.019279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.450 [2024-07-26 16:34:11.019289] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.450 [2024-07-26 16:34:11.019303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.450 [2024-07-26 16:34:11.019311] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.450 [2024-07-26 16:34:11.019335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.450 [2024-07-26 16:34:11.019340] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.450 [2024-07-26 16:34:11.019358] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.450 [2024-07-26 16:34:11.019359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.450 [2024-07-26 16:34:11.019377] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.450 [2024-07-26 16:34:11.019398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.450 [2024-07-26 
16:34:11.019412] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.450 [2024-07-26 16:34:11.019422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.451 [2024-07-26 16:34:11.019429] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.019444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.451 [2024-07-26 16:34:11.019446] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.019466] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.019483] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.019500] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.019518] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.019520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.451 [2024-07-26 16:34:11.019535] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.019552] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.019569] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.019586] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.019603] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.019620] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.019641] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.019660] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.019677] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.019695] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.019712] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.451 
[2024-07-26 16:34:11.019729] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.019747] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.019764] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.019781] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.019798] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.019815] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.019902] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.019921] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.019938] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.019956] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.023220] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.023255] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.023276] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.023294] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.023311] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.023334] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.023352] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.023350] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f9d00 was disconnected and freed. reset controller. 
00:28:51.451 [2024-07-26 16:34:11.023370] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.023403] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.023420] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.023449] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.023468] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.023501] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.023520] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.023537] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.023554] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.023553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.451 [2024-07-26 16:34:11.023571] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.023586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.451 [2024-07-26 16:34:11.023594] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.023612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.451 [2024-07-26 16:34:11.023627] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.023634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.451 [2024-07-26 16:34:11.023658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.451 [2024-07-26 16:34:11.023662] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.023679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.451 [2024-07-26 16:34:11.023694] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.023700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:28:51.451 [2024-07-26 16:34:11.023714] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.023721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.451 [2024-07-26 16:34:11.023733] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.023741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6880 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.023751] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.023769] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.023786] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.023804] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.023807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.451 [2024-07-26 16:34:11.023827] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.023834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.451 [2024-07-26 16:34:11.023846] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.023857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.451 [2024-07-26 16:34:11.023865] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.023878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.451 [2024-07-26 16:34:11.023883] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.023900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.451 [2024-07-26 16:34:11.023901] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.023921] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.023923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.451 [2024-07-26 16:34:11.023940] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.451 [2024-07-26 16:34:11.023946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.451 [2024-07-26 16:34:11.023959] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.452 [2024-07-26 16:34:11.023967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.452 [2024-07-26 16:34:11.023977] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.452 [2024-07-26 16:34:11.023986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3b80 is same with the state(5) to be set 00:28:51.452 [2024-07-26 16:34:11.023994] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.452 [2024-07-26 16:34:11.024012] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.452 [2024-07-26 16:34:11.024029] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.452 [2024-07-26 16:34:11.024047] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.452 [2024-07-26 16:34:11.024054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.452 [2024-07-26 16:34:11.024073] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.452 [2024-07-26 16:34:11.024089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.452 [2024-07-26 16:34:11.024104] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.452 [2024-07-26 16:34:11.024118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.452 [2024-07-26 16:34:11.024130] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.452 [2024-07-26 16:34:11.024139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.452 [2024-07-26 16:34:11.024150] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.452 [2024-07-26 16:34:11.024160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.452 [2024-07-26 16:34:11.024168] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.452 [2024-07-26 16:34:11.024181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.452 [2024-07-26 16:34:11.024186] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.452 [2024-07-26 16:34:11.024203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.452 [2024-07-26 16:34:11.024204] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.452 [2024-07-26 16:34:11.024224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.452 [2024-07-26 16:34:11.024227] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.452 [2024-07-26 16:34:11.024243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(5) to be set 00:28:51.452 [2024-07-26 16:34:11.024246] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.452 [2024-07-26 16:34:11.024265] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.452 [2024-07-26 16:34:11.024282] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.452 [2024-07-26 16:34:11.024300] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.452 [2024-07-26 16:34:11.024321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.452 [2024-07-26 16:34:11.024326] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.452 [2024-07-26 16:34:11.024348] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.452 [2024-07-26 16:34:11.024350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.452 [2024-07-26 16:34:11.024366] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.452 [2024-07-26 16:34:11.024373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.452 [2024-07-26 16:34:11.024385] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.452 [2024-07-26 16:34:11.024394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.452 [2024-07-26 16:34:11.024403] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.452 [2024-07-26 16:34:11.024426] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.452 [2024-07-26 16:34:11.024417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:51.452 [2024-07-26 16:34:11.024446] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.452 [2024-07-26 16:34:11.024453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.452 [2024-07-26 16:34:11.024465] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.452 [2024-07-26 16:34:11.024476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.452 [2024-07-26 16:34:11.024482] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.452 [2024-07-26 16:34:11.024498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.452 [2024-07-26 16:34:11.024501] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.452 [2024-07-26 16:34:11.024517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4a80 is same with the state(5) to be set 00:28:51.452 [2024-07-26 16:34:11.024519] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:51.452 [2024-07-26 16:34:11.024588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.452 [2024-07-26 16:34:11.024616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.452 [2024-07-26 16:34:11.024639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.452 [2024-07-26 16:34:11.024660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.452 [2024-07-26 16:34:11.024682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.452 [2024-07-26 16:34:11.024703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.452 [2024-07-26 16:34:11.024725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.452 [2024-07-26 16:34:11.024745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.452 [2024-07-26 16:34:11.024764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:28:51.452 [2024-07-26 16:34:11.024828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.452 [2024-07-26 16:34:11.024857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.452 [2024-07-26 16:34:11.024879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.452 [2024-07-26 16:34:11.024900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.452 [2024-07-26 16:34:11.024921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.452 [2024-07-26 16:34:11.024941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.452 [2024-07-26 16:34:11.024968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.452 [2024-07-26 16:34:11.024991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.452 [2024-07-26 16:34:11.025010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2c80 is same with the state(5) to be set 00:28:51.452 [2024-07-26 16:34:11.025086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.452 [2024-07-26 16:34:11.025124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.452 [2024-07-26 16:34:11.025147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.452 [2024-07-26 16:34:11.025168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.452 [2024-07-26 16:34:11.025189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.452 [2024-07-26 16:34:11.025209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.452 [2024-07-26 16:34:11.025230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.452 [2024-07-26 16:34:11.025251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.452 [2024-07-26 16:34:11.025270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3400 is same with the state(5) to be set 00:28:51.452 [2024-07-26 16:34:11.025334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.452 [2024-07-26 16:34:11.025372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.452 [2024-07-26 16:34:11.025410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.452 [2024-07-26 16:34:11.025447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.452 [2024-07-26 16:34:11.025482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.452 
[2024-07-26 16:34:11.025506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.453 [2024-07-26 16:34:11.025527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.453 [2024-07-26 16:34:11.025547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.453 [2024-07-26 16:34:11.025567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5200 is same with the state(5) to be set 00:28:51.453 [2024-07-26 16:34:11.025880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.453 [2024-07-26 16:34:11.025915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.453 [2024-07-26 16:34:11.025949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.453 [2024-07-26 16:34:11.025974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.453 [2024-07-26 16:34:11.026005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.453 [2024-07-26 16:34:11.026031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.453 [2024-07-26 16:34:11.026056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.453 [2024-07-26 16:34:11.026113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.453 [2024-07-26 16:34:11.026138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.453 [2024-07-26 16:34:11.026161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.453 [2024-07-26 16:34:11.026186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.453 [2024-07-26 16:34:11.026209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.453 [2024-07-26 16:34:11.026233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.453 [2024-07-26 16:34:11.026255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.453 [2024-07-26 16:34:11.026280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.453 [2024-07-26 16:34:11.026302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.453 [2024-07-26 16:34:11.026336] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.453 [2024-07-26 16:34:11.026358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.453 [2024-07-26 16:34:11.026383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.453 [2024-07-26 16:34:11.026405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.453 [2024-07-26 16:34:11.026430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.453 [2024-07-26 16:34:11.026453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.453 [2024-07-26 16:34:11.026478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.453 [2024-07-26 16:34:11.026500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.453 [2024-07-26 16:34:11.026525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.453 [2024-07-26 16:34:11.026548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.453 [2024-07-26 16:34:11.026573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.453 [2024-07-26 16:34:11.026595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.453 [2024-07-26 16:34:11.026620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.453 [2024-07-26 16:34:11.026646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.453 [2024-07-26 16:34:11.026673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.453 [2024-07-26 16:34:11.026695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.453 [2024-07-26 16:34:11.026734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.453 [2024-07-26 16:34:11.026757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.453 [2024-07-26 16:34:11.026782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.453 [2024-07-26 16:34:11.026803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.453 [2024-07-26 16:34:11.026827] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.453 [2024-07-26 16:34:11.026849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.453 [2024-07-26 16:34:11.026872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.453 [2024-07-26 16:34:11.026893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.453 [2024-07-26 16:34:11.026907] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same [2024-07-26 16:34:11.026918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:1with the state(5) to be set 00:28:51.453 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.453 [2024-07-26 16:34:11.026943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.453 [2024-07-26 16:34:11.026944] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.453 [2024-07-26 16:34:11.026964] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.453 [2024-07-26 16:34:11.026967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.453 [2024-07-26 16:34:11.026982] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.453 [2024-07-26 16:34:11.026989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.453 [2024-07-26 16:34:11.027000] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.453 [2024-07-26 16:34:11.027014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:1[2024-07-26 16:34:11.027017] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.453 with the state(5) to be set 00:28:51.453 [2024-07-26 16:34:11.027036] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same [2024-07-26 16:34:11.027037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cwith the state(5) to be set 00:28:51.453 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.453 [2024-07-26 16:34:11.027055] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.453 [2024-07-26 16:34:11.027069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.453 [2024-07-26 16:34:11.027109] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.453 [2024-07-26 16:34:11.027124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.453 [2024-07-26 16:34:11.027128] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.453 [2024-07-26 16:34:11.027146] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.453 [2024-07-26 16:34:11.027150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.453 [2024-07-26 16:34:11.027164] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.453 [2024-07-26 16:34:11.027173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.453 [2024-07-26 16:34:11.027182] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.453 [2024-07-26 16:34:11.027197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:1[2024-07-26 16:34:11.027200] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.453 with the state(5) to be set 00:28:51.453 [2024-07-26 16:34:11.027221] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same [2024-07-26 16:34:11.027221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cwith the state(5) to be set 00:28:51.453 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.453 [2024-07-26 16:34:11.027240] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.453 [2024-07-26 16:34:11.027248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.453 [2024-07-26 16:34:11.027259] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.453 [2024-07-26 16:34:11.027270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.453 [2024-07-26 16:34:11.027277] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.453 [2024-07-26 16:34:11.027295] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same [2024-07-26 16:34:11.027295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:1with the state(5) to be set 00:28:51.453 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.453 [2024-07-26 16:34:11.027316] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.453 [2024-07-26 16:34:11.027319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.453 [2024-07-26 16:34:11.027334] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.454 [2024-07-26 16:34:11.027354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:1[2024-07-26 16:34:11.027357] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.454 with the state(5) to be set 00:28:51.454 [2024-07-26 16:34:11.027393] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same [2024-07-26 16:34:11.027393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cwith the state(5) to be set 00:28:51.454 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.454 [2024-07-26 16:34:11.027418] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.454 [2024-07-26 16:34:11.027425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.454 [2024-07-26 16:34:11.027438] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.454 [2024-07-26 16:34:11.027447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.454 [2024-07-26 16:34:11.027457] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.454 [2024-07-26 16:34:11.027472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:1[2024-07-26 16:34:11.027474] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.454 with the state(5) to be set 00:28:51.454 [2024-07-26 16:34:11.027494] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.454 [2024-07-26 16:34:11.027495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.454 [2024-07-26 16:34:11.027511] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.454 [2024-07-26 16:34:11.027520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.454 [2024-07-26 16:34:11.027529] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.454 [2024-07-26 16:34:11.027542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.454 [2024-07-26 16:34:11.027548] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.454 [2024-07-26 16:34:11.027565] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.454 [2024-07-26 16:34:11.027566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.454 [2024-07-26 16:34:11.027582] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the 
state(5) to be set 00:28:51.454 [2024-07-26 16:34:11.027588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.454 [2024-07-26 16:34:11.027600] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.454 [2024-07-26 16:34:11.027613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.454 [2024-07-26 16:34:11.027618] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.454 [2024-07-26 16:34:11.027635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-26 16:34:11.027636] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.454 with the state(5) to be set 00:28:51.454 [2024-07-26 16:34:11.027656] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.454 [2024-07-26 16:34:11.027661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.454 [2024-07-26 16:34:11.027673] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.454 [2024-07-26 16:34:11.027687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.454 [2024-07-26 16:34:11.027691] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.454 [2024-07-26 16:34:11.027709] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.454 [2024-07-26 16:34:11.027713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.454 [2024-07-26 16:34:11.027727] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.454 [2024-07-26 16:34:11.027735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.454 [2024-07-26 16:34:11.027744] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.454 [2024-07-26 16:34:11.027759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:1[2024-07-26 16:34:11.027761] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.454 with the state(5) to be set 00:28:51.454 [2024-07-26 16:34:11.027782] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same [2024-07-26 16:34:11.027783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cwith the state(5) to be set 00:28:51.454 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.454 [2024-07-26 16:34:11.027801] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.454 [2024-07-26 16:34:11.027808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.454 [2024-07-26 16:34:11.027818] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.454 [2024-07-26 16:34:11.027831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.454 [2024-07-26 16:34:11.027836] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.454 [2024-07-26 16:34:11.027854] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.454 [2024-07-26 16:34:11.027855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.454 [2024-07-26 16:34:11.027871] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.454 [2024-07-26 16:34:11.027876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.454 [2024-07-26 16:34:11.027889] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.454 [2024-07-26 16:34:11.027900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.454 [2024-07-26 16:34:11.027909] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.454 [2024-07-26 16:34:11.027922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.454 [2024-07-26 16:34:11.027928] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.454 [2024-07-26 16:34:11.027947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:1[2024-07-26 16:34:11.027949] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.454 with the state(5) to be set 00:28:51.454 [2024-07-26 16:34:11.027970] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same [2024-07-26 16:34:11.027970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cwith the state(5) to be set 00:28:51.454 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.454 [2024-07-26 16:34:11.027990] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.454 [2024-07-26 16:34:11.027996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.454 [2024-07-26 16:34:11.028008] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 
00:28:51.454 [2024-07-26 16:34:11.028018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.454 [2024-07-26 16:34:11.028026] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.454 [2024-07-26 16:34:11.028042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:1[2024-07-26 16:34:11.028044] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.454 with the state(5) to be set 00:28:51.455 [2024-07-26 16:34:11.028087] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same [2024-07-26 16:34:11.028087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cwith the state(5) to be set 00:28:51.455 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.455 [2024-07-26 16:34:11.028111] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.455 [2024-07-26 16:34:11.028119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.455 [2024-07-26 16:34:11.028131] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.455 [2024-07-26 16:34:11.028149] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.455 [2024-07-26 16:34:11.028142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.455 [2024-07-26 16:34:11.028167] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:51.455 [2024-07-26 16:34:11.028183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.455 [2024-07-26 16:34:11.028207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.455 [2024-07-26 16:34:11.028231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.455 [2024-07-26 16:34:11.028254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.455 [2024-07-26 16:34:11.028278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.455 [2024-07-26 16:34:11.028300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.455 [2024-07-26 16:34:11.028331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.455 [2024-07-26 16:34:11.028358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.455 [2024-07-26 16:34:11.028399] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.455 [2024-07-26 16:34:11.028421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.455 [2024-07-26 16:34:11.028446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.455 [2024-07-26 16:34:11.028468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.455 [2024-07-26 16:34:11.028494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.455 [2024-07-26 16:34:11.028516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.455 [2024-07-26 16:34:11.028541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.455 [2024-07-26 16:34:11.028563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.455 [2024-07-26 16:34:11.028587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.455 [2024-07-26 16:34:11.028610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.455 [2024-07-26 16:34:11.028634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.455 [2024-07-26 16:34:11.028657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.455 [2024-07-26 16:34:11.028681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.455 [2024-07-26 16:34:11.028703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.455 [2024-07-26 16:34:11.028727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.455 [2024-07-26 16:34:11.028750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.455 [2024-07-26 16:34:11.028774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.455 [2024-07-26 16:34:11.028796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.455 [2024-07-26 16:34:11.028820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.455 [2024-07-26 16:34:11.028842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.455 [2024-07-26 16:34:11.028866] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.455 [2024-07-26 16:34:11.028888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.455 [2024-07-26 16:34:11.028912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.455 [2024-07-26 16:34:11.028938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.455 [2024-07-26 16:34:11.028963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.455 [2024-07-26 16:34:11.028986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.455 [2024-07-26 16:34:11.029011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.455 [2024-07-26 16:34:11.029033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.455 [2024-07-26 16:34:11.029057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.455 [2024-07-26 16:34:11.029103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.455 [2024-07-26 16:34:11.029130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.455 [2024-07-26 16:34:11.029154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.455 [2024-07-26 16:34:11.029477] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f8b80 was disconnected and freed. reset controller. 
00:28:51.455 [2024-07-26 16:34:11.029604] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.455 [2024-07-26 16:34:11.029640] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.455 [2024-07-26 16:34:11.029661] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.455 [2024-07-26 16:34:11.029679] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.455 [2024-07-26 16:34:11.029697] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.455 [2024-07-26 16:34:11.029716] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.455 [2024-07-26 16:34:11.029734] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.455 [2024-07-26 16:34:11.029752] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.455 [2024-07-26 16:34:11.029769] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.455 [2024-07-26 16:34:11.029787] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.455 [2024-07-26 16:34:11.029804] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.455 [2024-07-26 16:34:11.029821] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.455 [2024-07-26 16:34:11.029838] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.455 [2024-07-26 16:34:11.029856] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.455 [2024-07-26 16:34:11.029874] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.455 [2024-07-26 16:34:11.029891] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.455 [2024-07-26 16:34:11.029915] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.455 [2024-07-26 16:34:11.029934] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.455 [2024-07-26 16:34:11.029952] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.455 [2024-07-26 16:34:11.029969] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.455 [2024-07-26 16:34:11.029987] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 
00:28:51.455 [2024-07-26 16:34:11.030004] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.455 [2024-07-26 16:34:11.030022] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.455 [2024-07-26 16:34:11.030039] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.455 [2024-07-26 16:34:11.030056] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.455 [2024-07-26 16:34:11.030086] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.455 [2024-07-26 16:34:11.030105] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.455 [2024-07-26 16:34:11.030123] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.455 [2024-07-26 16:34:11.030140] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.455 [2024-07-26 16:34:11.030158] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.455 [2024-07-26 16:34:11.030176] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.455 [2024-07-26 16:34:11.030194] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.456 [2024-07-26 16:34:11.030211] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.456 [2024-07-26 16:34:11.030238] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.456 [2024-07-26 16:34:11.030256] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.456 [2024-07-26 16:34:11.030274] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.456 [2024-07-26 16:34:11.030291] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.456 [2024-07-26 16:34:11.030308] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.456 [2024-07-26 16:34:11.030326] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.456 [2024-07-26 16:34:11.030343] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.456 [2024-07-26 16:34:11.030361] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.456 [2024-07-26 16:34:11.030378] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 
00:28:51.456 [2024-07-26 16:34:11.030399] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.456 [2024-07-26 16:34:11.030418] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.456 [2024-07-26 16:34:11.030436] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.456 [2024-07-26 16:34:11.030453] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.456 [2024-07-26 16:34:11.030470] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.456 [2024-07-26 16:34:11.030487] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.456 [2024-07-26 16:34:11.030504] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.456 [2024-07-26 16:34:11.030521] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.456 [2024-07-26 16:34:11.030538] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.456 [2024-07-26 16:34:11.030555] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.456 [2024-07-26 16:34:11.030586] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.456 [2024-07-26 16:34:11.030604] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.456 [2024-07-26 16:34:11.030620] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.456 [2024-07-26 16:34:11.030637] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.456 [2024-07-26 16:34:11.030655] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.456 [2024-07-26 16:34:11.030673] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.456 [2024-07-26 16:34:11.030691] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.456 [2024-07-26 16:34:11.030707] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.456 [2024-07-26 16:34:11.030724] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.456 [2024-07-26 16:34:11.030741] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:51.456 [2024-07-26 16:34:11.030758] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 
00:28:51.456 [2024-07-26 16:34:11.032998] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:28:51.456 [2024-07-26 16:34:11.033048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:28:51.456 [2024-07-26 16:34:11.033110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3400 (9): Bad file descriptor 00:28:51.456 [2024-07-26 16:34:11.033149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6880 (9): Bad file descriptor 00:28:51.456 [2024-07-26 16:34:11.034370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.456 [2024-07-26 16:34:11.034404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.456 [2024-07-26 16:34:11.034436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.456 [2024-07-26 16:34:11.034459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.456 [2024-07-26 16:34:11.034482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.456 [2024-07-26 16:34:11.034502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.456 [2024-07-26 16:34:11.034524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.456 [2024-07-26 16:34:11.034546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.456 [2024-07-26 16:34:11.034566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6100 is same with the state(5) to be set 00:28:51.456 [2024-07-26 16:34:11.034622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3b80 (9): Bad file descriptor 00:28:51.456 [2024-07-26 16:34:11.034671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:28:51.456 [2024-07-26 16:34:11.034717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4a80 (9): Bad file descriptor 00:28:51.456 [2024-07-26 16:34:11.034780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:28:51.456 [2024-07-26 16:34:11.034823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2c80 (9): Bad file descriptor 00:28:51.456 [2024-07-26 16:34:11.034871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5200 (9): Bad file descriptor 00:28:51.456 [2024-07-26 16:34:11.034958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.456 [2024-07-26 16:34:11.034987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.456 [2024-07-26 
16:34:11.035010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.456 [2024-07-26 16:34:11.035032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.456 [2024-07-26 16:34:11.035054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.456 [2024-07-26 16:34:11.035084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.456 [2024-07-26 16:34:11.035107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.456 [2024-07-26 16:34:11.035128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.456 [2024-07-26 16:34:11.035148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5980 is same with the state(5) to be set 00:28:51.456 [2024-07-26 16:34:11.035909] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:51.456 [2024-07-26 16:34:11.036631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.456 [2024-07-26 16:34:11.036672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6880 with addr=10.0.0.2, port=4420 00:28:51.456 [2024-07-26 16:34:11.036698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6880 is same with the state(5) to be set 00:28:51.456 [2024-07-26 16:34:11.036859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.456 [2024-07-26 16:34:11.036893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3400 with addr=10.0.0.2, port=4420 00:28:51.456 [2024-07-26 16:34:11.036916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3400 is same with the state(5) to be set 00:28:51.456 [2024-07-26 16:34:11.037017] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:51.456 [2024-07-26 16:34:11.037126] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:51.456 [2024-07-26 16:34:11.037274] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:51.456 [2024-07-26 16:34:11.037364] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:51.456 [2024-07-26 16:34:11.037451] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:51.456 [2024-07-26 16:34:11.037536] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:51.456 [2024-07-26 16:34:11.037672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6880 (9): Bad file descriptor 00:28:51.456 [2024-07-26 16:34:11.037711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3400 (9): Bad file descriptor 00:28:51.456 [2024-07-26 16:34:11.037923] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:51.456 [2024-07-26 16:34:11.037968] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:28:51.456 [2024-07-26 
16:34:11.037994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:28:51.456 [2024-07-26 16:34:11.038022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:28:51.456 [2024-07-26 16:34:11.038067] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:28:51.456 [2024-07-26 16:34:11.038092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:28:51.456 [2024-07-26 16:34:11.038113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:28:51.456 [2024-07-26 16:34:11.038260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.456 [2024-07-26 16:34:11.038291] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.456 [2024-07-26 16:34:11.044414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6100 (9): Bad file descriptor 00:28:51.456 [2024-07-26 16:34:11.044627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5980 (9): Bad file descriptor 00:28:51.457 [2024-07-26 16:34:11.044974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.457 [2024-07-26 16:34:11.045012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.457 [2024-07-26 16:34:11.045078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.457 [2024-07-26 16:34:11.045104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.457 [2024-07-26 16:34:11.045132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.457 [2024-07-26 16:34:11.045156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.457 [2024-07-26 16:34:11.045182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.457 [2024-07-26 16:34:11.045205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.457 [2024-07-26 16:34:11.045241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.457 [2024-07-26 16:34:11.045267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.457 [2024-07-26 16:34:11.045292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.457 [2024-07-26 16:34:11.045315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.457 [2024-07-26 16:34:11.045340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 
nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.457 [2024-07-26 16:34:11.045363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.457 [2024-07-26 16:34:11.045390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.457 [2024-07-26 16:34:11.045413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.457 [2024-07-26 16:34:11.045438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.457 [2024-07-26 16:34:11.045461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.457 [2024-07-26 16:34:11.045486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.457 [2024-07-26 16:34:11.045510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.457 [2024-07-26 16:34:11.045535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.457 [2024-07-26 16:34:11.045558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.457 [2024-07-26 16:34:11.045583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.457 [2024-07-26 16:34:11.045605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.457 [2024-07-26 16:34:11.045630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.457 [2024-07-26 16:34:11.045652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.457 [2024-07-26 16:34:11.045678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.457 [2024-07-26 16:34:11.045700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.457 [2024-07-26 16:34:11.045725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.457 [2024-07-26 16:34:11.045748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.457 [2024-07-26 16:34:11.045773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.457 [2024-07-26 16:34:11.045795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.457 [2024-07-26 16:34:11.045820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.457 [2024-07-26 16:34:11.045866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.457 [2024-07-26 16:34:11.045894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.457 [2024-07-26 16:34:11.045918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.457 [2024-07-26 16:34:11.045943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.457 [2024-07-26 16:34:11.045966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.457 [2024-07-26 16:34:11.045991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.457 [2024-07-26 16:34:11.046014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.457 [2024-07-26 16:34:11.046039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.457 [2024-07-26 16:34:11.046069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.457 [2024-07-26 16:34:11.046106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.457 [2024-07-26 16:34:11.046128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.457 [2024-07-26 16:34:11.046152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.457 [2024-07-26 16:34:11.046174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.457 [2024-07-26 16:34:11.046202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.457 [2024-07-26 16:34:11.046224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.457 [2024-07-26 16:34:11.046249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.457 [2024-07-26 16:34:11.046272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.457 [2024-07-26 16:34:11.046297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.457 [2024-07-26 16:34:11.046320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.457 [2024-07-26 16:34:11.046345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:51.457 [2024-07-26 16:34:11.046368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.457 [2024-07-26 16:34:11.046393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.457 [2024-07-26 16:34:11.046415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.457 [2024-07-26 16:34:11.046440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.457 [2024-07-26 16:34:11.046463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.457 [2024-07-26 16:34:11.046493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.457 [2024-07-26 16:34:11.046516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.457 [2024-07-26 16:34:11.046542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.457 [2024-07-26 16:34:11.046564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.457 [2024-07-26 16:34:11.046589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.457 [2024-07-26 16:34:11.046611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.457 [2024-07-26 16:34:11.046635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.457 [2024-07-26 16:34:11.046658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.457 [2024-07-26 16:34:11.046683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.457 [2024-07-26 16:34:11.046705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.457 [2024-07-26 16:34:11.046729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.457 [2024-07-26 16:34:11.046752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.457 [2024-07-26 16:34:11.046777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.457 [2024-07-26 16:34:11.046799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.457 [2024-07-26 16:34:11.046823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:51.457 [2024-07-26 16:34:11.046845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.457 [2024-07-26 16:34:11.046870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.457 [2024-07-26 16:34:11.046892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.457 [2024-07-26 16:34:11.046917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.457 [2024-07-26 16:34:11.046939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.457 [2024-07-26 16:34:11.046965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.458 [2024-07-26 16:34:11.046987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.458 [2024-07-26 16:34:11.047012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.458 [2024-07-26 16:34:11.047034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.458 [2024-07-26 16:34:11.047066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.458 [2024-07-26 16:34:11.047095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.458 [2024-07-26 16:34:11.047122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.458 [2024-07-26 16:34:11.047145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.458 [2024-07-26 16:34:11.047170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.458 [2024-07-26 16:34:11.047194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.458 [2024-07-26 16:34:11.047219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.458 [2024-07-26 16:34:11.047242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.458 [2024-07-26 16:34:11.047269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.458 [2024-07-26 16:34:11.047291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.458 [2024-07-26 16:34:11.047317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.458 [2024-07-26 
16:34:11.047339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.458 [2024-07-26 16:34:11.047365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.458 [2024-07-26 16:34:11.047387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.458 [2024-07-26 16:34:11.047413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.458 [2024-07-26 16:34:11.047436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.458 [2024-07-26 16:34:11.047461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.458 [2024-07-26 16:34:11.047484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.458 [2024-07-26 16:34:11.047509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.458 [2024-07-26 16:34:11.047532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.458 [2024-07-26 16:34:11.047557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.458 [2024-07-26 16:34:11.047579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.458 [2024-07-26 16:34:11.047605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.458 [2024-07-26 16:34:11.047627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.458 [2024-07-26 16:34:11.047653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.458 [2024-07-26 16:34:11.047675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.458 [2024-07-26 16:34:11.047704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.458 [2024-07-26 16:34:11.047728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.458 [2024-07-26 16:34:11.047753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.458 [2024-07-26 16:34:11.047775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.458 [2024-07-26 16:34:11.047799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.458 [2024-07-26 16:34:11.047822] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.458 [2024-07-26 16:34:11.047847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.458 [2024-07-26 16:34:11.047869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.458 [2024-07-26 16:34:11.047895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.458 [2024-07-26 16:34:11.047917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.458 [2024-07-26 16:34:11.047942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.458 [2024-07-26 16:34:11.047964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.458 [2024-07-26 16:34:11.047988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.458 [2024-07-26 16:34:11.048011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.458 [2024-07-26 16:34:11.048035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.458 [2024-07-26 16:34:11.048063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.458 [2024-07-26 16:34:11.048091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.458 [2024-07-26 16:34:11.048123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.458 [2024-07-26 16:34:11.048147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.458 [2024-07-26 16:34:11.048170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.458 [2024-07-26 16:34:11.048193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f8680 is same with the state(5) to be set 00:28:51.458 [2024-07-26 16:34:11.049898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.458 [2024-07-26 16:34:11.049932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.458 [2024-07-26 16:34:11.049970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.458 [2024-07-26 16:34:11.049994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.458 [2024-07-26 16:34:11.050030] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.458 [2024-07-26 16:34:11.050055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.458 [2024-07-26 16:34:11.050089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.458 [2024-07-26 16:34:11.050112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.458 [2024-07-26 16:34:11.050138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.458 [2024-07-26 16:34:11.050160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.458 [2024-07-26 16:34:11.050185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.458 [2024-07-26 16:34:11.050207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.458 [2024-07-26 16:34:11.050232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.458 [2024-07-26 16:34:11.050254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.458 [2024-07-26 16:34:11.050279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.458 [2024-07-26 16:34:11.050301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.458 [2024-07-26 16:34:11.050326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.458 [2024-07-26 16:34:11.050348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.458 [2024-07-26 16:34:11.050373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.459 [2024-07-26 16:34:11.050396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.459 [2024-07-26 16:34:11.050421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.459 [2024-07-26 16:34:11.050443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.459 [2024-07-26 16:34:11.050467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.459 [2024-07-26 16:34:11.050490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.459 [2024-07-26 16:34:11.050515] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.459 [2024-07-26 16:34:11.050537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.459 [2024-07-26 16:34:11.050562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.459 [2024-07-26 16:34:11.050584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.459 [2024-07-26 16:34:11.050610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.459 [2024-07-26 16:34:11.050636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.459 [2024-07-26 16:34:11.050664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.459 [2024-07-26 16:34:11.050686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.459 [2024-07-26 16:34:11.050731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.459 [2024-07-26 16:34:11.050754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.459 [2024-07-26 16:34:11.050779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.459 [2024-07-26 16:34:11.050801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.459 [2024-07-26 16:34:11.050826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.459 [2024-07-26 16:34:11.050848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.459 [2024-07-26 16:34:11.050872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.459 [2024-07-26 16:34:11.050895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.459 [2024-07-26 16:34:11.050919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.459 [2024-07-26 16:34:11.050941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.459 [2024-07-26 16:34:11.050965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.459 [2024-07-26 16:34:11.050987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.459 [2024-07-26 16:34:11.051012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.459 [2024-07-26 16:34:11.051034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.459 [2024-07-26 16:34:11.065780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.459 [2024-07-26 16:34:11.065821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.459 [2024-07-26 16:34:11.065849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.459 [2024-07-26 16:34:11.065871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.459 [2024-07-26 16:34:11.065896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.459 [2024-07-26 16:34:11.065917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.459 [2024-07-26 16:34:11.065941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.459 [2024-07-26 16:34:11.065962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.459 [2024-07-26 16:34:11.065987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.459 [2024-07-26 16:34:11.066015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.459 [2024-07-26 16:34:11.066041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.459 [2024-07-26 16:34:11.066086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.459 [2024-07-26 16:34:11.066115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.459 [2024-07-26 16:34:11.066137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.459 [2024-07-26 16:34:11.066162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.459 [2024-07-26 16:34:11.066184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.459 [2024-07-26 16:34:11.066209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.459 [2024-07-26 16:34:11.066231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.459 [2024-07-26 16:34:11.066256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.459 [2024-07-26 16:34:11.066279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.459 [2024-07-26 16:34:11.066303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.459 [2024-07-26 16:34:11.066326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.459 [2024-07-26 16:34:11.066350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.459 [2024-07-26 16:34:11.066388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.459 [2024-07-26 16:34:11.066412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.459 [2024-07-26 16:34:11.066434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.459 [2024-07-26 16:34:11.066458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.459 [2024-07-26 16:34:11.066479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.459 [2024-07-26 16:34:11.066503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.459 [2024-07-26 16:34:11.066525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.459 [2024-07-26 16:34:11.066548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.459 [2024-07-26 16:34:11.066570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.459 [2024-07-26 16:34:11.066593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.459 [2024-07-26 16:34:11.066614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.459 [2024-07-26 16:34:11.066643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.459 [2024-07-26 16:34:11.066666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.459 [2024-07-26 16:34:11.066690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.459 [2024-07-26 16:34:11.066711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.459 [2024-07-26 16:34:11.066735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:51.459 [2024-07-26 16:34:11.066756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.459 [2024-07-26 16:34:11.066780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.459 [2024-07-26 16:34:11.066802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.459 [2024-07-26 16:34:11.066825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.459 [2024-07-26 16:34:11.066846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.459 [2024-07-26 16:34:11.066869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.459 [2024-07-26 16:34:11.066891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.459 [2024-07-26 16:34:11.066914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.459 [2024-07-26 16:34:11.066936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.459 [2024-07-26 16:34:11.066960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.459 [2024-07-26 16:34:11.066981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.459 [2024-07-26 16:34:11.067005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.460 [2024-07-26 16:34:11.067027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.460 [2024-07-26 16:34:11.067077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.460 [2024-07-26 16:34:11.067102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.460 [2024-07-26 16:34:11.067127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.460 [2024-07-26 16:34:11.067149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.460 [2024-07-26 16:34:11.067174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.460 [2024-07-26 16:34:11.067198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.460 [2024-07-26 16:34:11.067223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:51.460 [2024-07-26 16:34:11.067249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.460 [2024-07-26 16:34:11.067275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.460 [2024-07-26 16:34:11.067298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.460 [2024-07-26 16:34:11.067323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.460 [2024-07-26 16:34:11.067361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.460 [2024-07-26 16:34:11.067385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.460 [2024-07-26 16:34:11.067407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.460 [2024-07-26 16:34:11.067431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.460 [2024-07-26 16:34:11.067452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.460 [2024-07-26 16:34:11.067475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.460 [2024-07-26 16:34:11.067497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.460 [2024-07-26 16:34:11.067520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.460 [2024-07-26 16:34:11.067541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.460 [2024-07-26 16:34:11.067565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.460 [2024-07-26 16:34:11.067587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.460 [2024-07-26 16:34:11.067611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.460 [2024-07-26 16:34:11.067632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.460 [2024-07-26 16:34:11.067656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.460 [2024-07-26 16:34:11.067677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.460 [2024-07-26 16:34:11.067701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.460 [2024-07-26 
16:34:11.067722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.460 [2024-07-26 16:34:11.067749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.460 [2024-07-26 16:34:11.067770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.460 [2024-07-26 16:34:11.067792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f8900 is same with the state(5) to be set 00:28:51.460 [2024-07-26 16:34:11.069435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.460 [2024-07-26 16:34:11.069473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.460 [2024-07-26 16:34:11.069505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.460 [2024-07-26 16:34:11.069530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.460 [2024-07-26 16:34:11.069555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.460 [2024-07-26 16:34:11.069577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.460 [2024-07-26 16:34:11.069601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.460 [2024-07-26 16:34:11.069622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.460 [2024-07-26 16:34:11.069646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.460 [2024-07-26 16:34:11.069668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.460 [2024-07-26 16:34:11.069692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.460 [2024-07-26 16:34:11.069714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.460 [2024-07-26 16:34:11.069738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.460 [2024-07-26 16:34:11.069759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.460 [2024-07-26 16:34:11.069783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.460 [2024-07-26 16:34:11.069804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.460 [2024-07-26 16:34:11.069829] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.460 [2024-07-26 16:34:11.069850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.460 [2024-07-26 16:34:11.069874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.460 [2024-07-26 16:34:11.069896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.460 [2024-07-26 16:34:11.069920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.460 [2024-07-26 16:34:11.069942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.460 [2024-07-26 16:34:11.069966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.460 [2024-07-26 16:34:11.069987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.460 [2024-07-26 16:34:11.070011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.460 [2024-07-26 16:34:11.070033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.460 [2024-07-26 16:34:11.070092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.460 [2024-07-26 16:34:11.070128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.460 [2024-07-26 16:34:11.070153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.460 [2024-07-26 16:34:11.070176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.460 [2024-07-26 16:34:11.070201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.460 [2024-07-26 16:34:11.070283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.460 [2024-07-26 16:34:11.070311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.460 [2024-07-26 16:34:11.070333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.460 [2024-07-26 16:34:11.070373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.460 [2024-07-26 16:34:11.070395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.460 [2024-07-26 16:34:11.070419] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.460 [2024-07-26 16:34:11.070441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.460 [2024-07-26 16:34:11.070464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.460 [2024-07-26 16:34:11.070485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.460 [2024-07-26 16:34:11.070509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.460 [2024-07-26 16:34:11.070531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.460 [2024-07-26 16:34:11.070554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.460 [2024-07-26 16:34:11.070576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.460 [2024-07-26 16:34:11.070600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.460 [2024-07-26 16:34:11.070621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.461 [2024-07-26 16:34:11.070645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.461 [2024-07-26 16:34:11.070666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.461 [2024-07-26 16:34:11.070690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.461 [2024-07-26 16:34:11.070712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.461 [2024-07-26 16:34:11.070735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.461 [2024-07-26 16:34:11.070762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.461 [2024-07-26 16:34:11.070787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.461 [2024-07-26 16:34:11.070809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.461 [2024-07-26 16:34:11.070832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.461 [2024-07-26 16:34:11.070853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.461 [2024-07-26 16:34:11.070877] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.461 [2024-07-26 16:34:11.070898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.461 [2024-07-26 16:34:11.070922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.461 [2024-07-26 16:34:11.070943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.461 [2024-07-26 16:34:11.070967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.461 [2024-07-26 16:34:11.070988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.461 [2024-07-26 16:34:11.071011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.461 [2024-07-26 16:34:11.071032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.461 [2024-07-26 16:34:11.071082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.461 [2024-07-26 16:34:11.071106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.461 [2024-07-26 16:34:11.071131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.461 [2024-07-26 16:34:11.071153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.461 [2024-07-26 16:34:11.071177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.461 [2024-07-26 16:34:11.071199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.461 [2024-07-26 16:34:11.071224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.461 [2024-07-26 16:34:11.071246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.461 [2024-07-26 16:34:11.071270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.461 [2024-07-26 16:34:11.071292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.461 [2024-07-26 16:34:11.071317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.461 [2024-07-26 16:34:11.071353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.461 [2024-07-26 16:34:11.071384] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.461 [2024-07-26 16:34:11.071407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.461 [2024-07-26 16:34:11.071430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.461 [2024-07-26 16:34:11.071451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.461 [2024-07-26 16:34:11.071474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.461 [2024-07-26 16:34:11.071496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.461 [2024-07-26 16:34:11.071520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.461 [2024-07-26 16:34:11.071540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.461 [2024-07-26 16:34:11.071564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.461 [2024-07-26 16:34:11.071585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.461 [2024-07-26 16:34:11.071608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.461 [2024-07-26 16:34:11.071629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.461 [2024-07-26 16:34:11.071652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.461 [2024-07-26 16:34:11.071673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.461 [2024-07-26 16:34:11.071697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.461 [2024-07-26 16:34:11.071718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.461 [2024-07-26 16:34:11.071742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.461 [2024-07-26 16:34:11.071764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.461 [2024-07-26 16:34:11.071788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.461 [2024-07-26 16:34:11.071809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.461 [2024-07-26 16:34:11.071833] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.461 [2024-07-26 16:34:11.071854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.461 [2024-07-26 16:34:11.071878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.461 [2024-07-26 16:34:11.071899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.461 [2024-07-26 16:34:11.071922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.461 [2024-07-26 16:34:11.071948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.461 [2024-07-26 16:34:11.071973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.461 [2024-07-26 16:34:11.071995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.461 [2024-07-26 16:34:11.072018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.461 [2024-07-26 16:34:11.072054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.461 [2024-07-26 16:34:11.072089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.461 [2024-07-26 16:34:11.072111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.461 [2024-07-26 16:34:11.072136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.461 [2024-07-26 16:34:11.072158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.461 [2024-07-26 16:34:11.072182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.461 [2024-07-26 16:34:11.072204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.461 [2024-07-26 16:34:11.072229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.461 [2024-07-26 16:34:11.072251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.461 [2024-07-26 16:34:11.072276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.461 [2024-07-26 16:34:11.072298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.461 [2024-07-26 16:34:11.072322] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.461 [2024-07-26 16:34:11.072360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.461 [2024-07-26 16:34:11.072385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.461 [2024-07-26 16:34:11.072406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.461 [2024-07-26 16:34:11.072430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.461 [2024-07-26 16:34:11.072451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.461 [2024-07-26 16:34:11.072475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.461 [2024-07-26 16:34:11.072496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.461 [2024-07-26 16:34:11.072520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.461 [2024-07-26 16:34:11.072543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.462 [2024-07-26 16:34:11.072570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.462 [2024-07-26 16:34:11.072593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.462 [2024-07-26 16:34:11.072614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f8e00 is same with the state(5) to be set 00:28:51.462 [2024-07-26 16:34:11.074186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.462 [2024-07-26 16:34:11.074217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.462 [2024-07-26 16:34:11.074259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.462 [2024-07-26 16:34:11.074286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.462 [2024-07-26 16:34:11.074313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.462 [2024-07-26 16:34:11.074336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.462 [2024-07-26 16:34:11.074360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.462 [2024-07-26 16:34:11.074398] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.462 [2024-07-26 16:34:11.074423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.462 [2024-07-26 16:34:11.074444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.462 [2024-07-26 16:34:11.074467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.462 [2024-07-26 16:34:11.074489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.462 [2024-07-26 16:34:11.074512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.462 [2024-07-26 16:34:11.074533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.462 [2024-07-26 16:34:11.074557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.462 [2024-07-26 16:34:11.074579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.462 [2024-07-26 16:34:11.074602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.462 [2024-07-26 16:34:11.074623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.462 [2024-07-26 16:34:11.074646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.462 [2024-07-26 16:34:11.074668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.462 [2024-07-26 16:34:11.074691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.462 [2024-07-26 16:34:11.074712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.462 [2024-07-26 16:34:11.074740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.462 [2024-07-26 16:34:11.074762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.462 [2024-07-26 16:34:11.074786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.462 [2024-07-26 16:34:11.074807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.462 [2024-07-26 16:34:11.074842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.462 [2024-07-26 16:34:11.074864] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.462 [2024-07-26 16:34:11.074888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.462 [2024-07-26 16:34:11.074926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.462 [2024-07-26 16:34:11.074962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.462 [2024-07-26 16:34:11.074984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.462 [2024-07-26 16:34:11.075009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.462 [2024-07-26 16:34:11.075030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.462 [2024-07-26 16:34:11.075055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.462 [2024-07-26 16:34:11.075112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.462 [2024-07-26 16:34:11.075165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.462 [2024-07-26 16:34:11.075188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.462 [2024-07-26 16:34:11.075213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.462 [2024-07-26 16:34:11.075235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.462 [2024-07-26 16:34:11.075259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.462 [2024-07-26 16:34:11.075281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.462 [2024-07-26 16:34:11.075306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.462 [2024-07-26 16:34:11.075328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.462 [2024-07-26 16:34:11.075352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.462 [2024-07-26 16:34:11.075374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.462 [2024-07-26 16:34:11.075414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.462 [2024-07-26 16:34:11.075440] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.462 [2024-07-26 16:34:11.075466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.462 [2024-07-26 16:34:11.075488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.462 [2024-07-26 16:34:11.075512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.462 [2024-07-26 16:34:11.075534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.462 [2024-07-26 16:34:11.075557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.462 [2024-07-26 16:34:11.075578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.462 [2024-07-26 16:34:11.075602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.462 [2024-07-26 16:34:11.075623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.462 [2024-07-26 16:34:11.075646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.462 [2024-07-26 16:34:11.075667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.462 [2024-07-26 16:34:11.075691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.462 [2024-07-26 16:34:11.075712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.462 [2024-07-26 16:34:11.075736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.462 [2024-07-26 16:34:11.075758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.462 [2024-07-26 16:34:11.075781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.462 [2024-07-26 16:34:11.075803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.462 [2024-07-26 16:34:11.075828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.462 [2024-07-26 16:34:11.075849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.462 [2024-07-26 16:34:11.075873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.462 [2024-07-26 16:34:11.075894] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.462 [2024-07-26 16:34:11.075918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.462 [2024-07-26 16:34:11.075939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.462 [2024-07-26 16:34:11.075962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.462 [2024-07-26 16:34:11.075983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.462 [2024-07-26 16:34:11.076012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.462 [2024-07-26 16:34:11.076034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.462 [2024-07-26 16:34:11.076085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.462 [2024-07-26 16:34:11.076109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.463 [2024-07-26 16:34:11.076133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.463 [2024-07-26 16:34:11.076156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.463 [2024-07-26 16:34:11.076181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.463 [2024-07-26 16:34:11.076202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.463 [2024-07-26 16:34:11.076226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.463 [2024-07-26 16:34:11.076249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.463 [2024-07-26 16:34:11.076273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.463 [2024-07-26 16:34:11.076296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.463 [2024-07-26 16:34:11.076321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.463 [2024-07-26 16:34:11.076343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.463 [2024-07-26 16:34:11.076382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.463 [2024-07-26 16:34:11.076405] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.463 [2024-07-26 16:34:11.076430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.463 [2024-07-26 16:34:11.076451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.463 [2024-07-26 16:34:11.076475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.463 [2024-07-26 16:34:11.076496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.463 [2024-07-26 16:34:11.076520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.463 [2024-07-26 16:34:11.076542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.463 [2024-07-26 16:34:11.076566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.463 [2024-07-26 16:34:11.076587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.463 [2024-07-26 16:34:11.076610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.463 [2024-07-26 16:34:11.076636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.463 [2024-07-26 16:34:11.076662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.463 [2024-07-26 16:34:11.076684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.463 [2024-07-26 16:34:11.076707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.463 [2024-07-26 16:34:11.076729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.463 [2024-07-26 16:34:11.076752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.463 [2024-07-26 16:34:11.076774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.463 [2024-07-26 16:34:11.076797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.463 [2024-07-26 16:34:11.076819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.463 [2024-07-26 16:34:11.076842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.463 [2024-07-26 16:34:11.076864] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.463 [2024-07-26 16:34:11.076889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.463 [2024-07-26 16:34:11.076910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.463 [2024-07-26 16:34:11.076935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.463 [2024-07-26 16:34:11.076957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.463 [2024-07-26 16:34:11.076981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.463 [2024-07-26 16:34:11.077003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.463 [2024-07-26 16:34:11.077028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.463 [2024-07-26 16:34:11.077072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.463 [2024-07-26 16:34:11.077098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.463 [2024-07-26 16:34:11.077120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.463 [2024-07-26 16:34:11.077145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.463 [2024-07-26 16:34:11.077167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.463 [2024-07-26 16:34:11.077191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.463 [2024-07-26 16:34:11.077213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.463 [2024-07-26 16:34:11.077243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.463 [2024-07-26 16:34:11.077267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.463 [2024-07-26 16:34:11.077292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.463 [2024-07-26 16:34:11.077314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.463 [2024-07-26 16:34:11.077339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.463 [2024-07-26 16:34:11.077377] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.463 [2024-07-26 16:34:11.077398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9080 is same with the state(5) to be set 00:28:51.463 [2024-07-26 16:34:11.078947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.463 [2024-07-26 16:34:11.078978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.463 [2024-07-26 16:34:11.079009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.463 [2024-07-26 16:34:11.079033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.463 [2024-07-26 16:34:11.079084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.463 [2024-07-26 16:34:11.079107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.463 [2024-07-26 16:34:11.079132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.463 [2024-07-26 16:34:11.079154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.463 [2024-07-26 16:34:11.079178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.463 [2024-07-26 16:34:11.079200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.463 [2024-07-26 16:34:11.079225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.463 [2024-07-26 16:34:11.079247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.463 [2024-07-26 16:34:11.079271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.463 [2024-07-26 16:34:11.079292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.464 [2024-07-26 16:34:11.079318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.464 [2024-07-26 16:34:11.079356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.464 [2024-07-26 16:34:11.079381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.464 [2024-07-26 16:34:11.079402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.464 [2024-07-26 16:34:11.079432] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.464 [2024-07-26 16:34:11.079455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.464 [2024-07-26 16:34:11.079479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.464 [2024-07-26 16:34:11.079500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.464 [2024-07-26 16:34:11.079523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.464 [2024-07-26 16:34:11.079544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.464 [2024-07-26 16:34:11.079568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.464 [2024-07-26 16:34:11.079589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.464 [2024-07-26 16:34:11.079614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.464 [2024-07-26 16:34:11.079636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.464 [2024-07-26 16:34:11.079660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.464 [2024-07-26 16:34:11.079692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.464 [2024-07-26 16:34:11.079718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.464 [2024-07-26 16:34:11.079740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.464 [2024-07-26 16:34:11.079764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.464 [2024-07-26 16:34:11.079785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.464 [2024-07-26 16:34:11.079808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.464 [2024-07-26 16:34:11.079829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.464 [2024-07-26 16:34:11.079853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.464 [2024-07-26 16:34:11.079874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.464 [2024-07-26 16:34:11.079898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.464 [2024-07-26 16:34:11.079919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.464 [2024-07-26 16:34:11.079943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.464 [2024-07-26 16:34:11.079964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.464 [2024-07-26 16:34:11.079987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.464 [2024-07-26 16:34:11.080008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.464 [2024-07-26 16:34:11.080051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.464 [2024-07-26 16:34:11.080083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.464 [2024-07-26 16:34:11.080110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.464 [2024-07-26 16:34:11.080132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.464 [2024-07-26 16:34:11.080157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.464 [2024-07-26 16:34:11.080180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.464 [2024-07-26 16:34:11.080204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.464 [2024-07-26 16:34:11.080235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.464 [2024-07-26 16:34:11.080260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.464 [2024-07-26 16:34:11.080282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.464 [2024-07-26 16:34:11.080307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.464 [2024-07-26 16:34:11.080328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.464 [2024-07-26 16:34:11.080353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.464 [2024-07-26 16:34:11.080391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.464 [2024-07-26 16:34:11.080417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.464 [2024-07-26 16:34:11.080439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.464 [2024-07-26 16:34:11.080462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.464 [2024-07-26 16:34:11.080483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.464 [2024-07-26 16:34:11.080508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.464 [2024-07-26 16:34:11.080531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.464 [2024-07-26 16:34:11.080555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.464 [2024-07-26 16:34:11.080577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.464 [2024-07-26 16:34:11.080602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.464 [2024-07-26 16:34:11.080633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.464 [2024-07-26 16:34:11.080657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.464 [2024-07-26 16:34:11.080682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.464 [2024-07-26 16:34:11.080707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.464 [2024-07-26 16:34:11.080729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.464 [2024-07-26 16:34:11.080754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.464 [2024-07-26 16:34:11.080775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.464 [2024-07-26 16:34:11.080798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.464 [2024-07-26 16:34:11.080819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.464 [2024-07-26 16:34:11.080842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.464 [2024-07-26 16:34:11.080863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.464 [2024-07-26 16:34:11.080887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:51.464 [2024-07-26 16:34:11.080908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.464 [2024-07-26 16:34:11.080932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.464 [2024-07-26 16:34:11.080954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.464 [2024-07-26 16:34:11.080977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.464 [2024-07-26 16:34:11.080999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.464 [2024-07-26 16:34:11.081024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.464 [2024-07-26 16:34:11.081068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.464 [2024-07-26 16:34:11.081097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.464 [2024-07-26 16:34:11.081119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.464 [2024-07-26 16:34:11.081144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.464 [2024-07-26 16:34:11.081167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.464 [2024-07-26 16:34:11.081194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.464 [2024-07-26 16:34:11.081217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.464 [2024-07-26 16:34:11.081243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.465 [2024-07-26 16:34:11.081266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.465 [2024-07-26 16:34:11.081296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.465 [2024-07-26 16:34:11.081320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.465 [2024-07-26 16:34:11.081361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.465 [2024-07-26 16:34:11.081384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.465 [2024-07-26 16:34:11.081408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:51.465 [2024-07-26 16:34:11.081430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.465 [2024-07-26 16:34:11.081454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.465 [2024-07-26 16:34:11.081475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.465 [2024-07-26 16:34:11.081499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.465 [2024-07-26 16:34:11.081520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.465 [2024-07-26 16:34:11.081544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.465 [2024-07-26 16:34:11.081566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.465 [2024-07-26 16:34:11.081590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.465 [2024-07-26 16:34:11.081612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.465 [2024-07-26 16:34:11.081637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.465 [2024-07-26 16:34:11.081658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.465 [2024-07-26 16:34:11.081682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.465 [2024-07-26 16:34:11.081704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.465 [2024-07-26 16:34:11.081730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.465 [2024-07-26 16:34:11.081751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.465 [2024-07-26 16:34:11.081775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.465 [2024-07-26 16:34:11.081798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.465 [2024-07-26 16:34:11.081822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.465 [2024-07-26 16:34:11.081843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.465 [2024-07-26 16:34:11.081867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.465 [2024-07-26 
16:34:11.081893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.465 [2024-07-26 16:34:11.081918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.465 [2024-07-26 16:34:11.081940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.465 [2024-07-26 16:34:11.081964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.465 [2024-07-26 16:34:11.081986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.465 [2024-07-26 16:34:11.082009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.465 [2024-07-26 16:34:11.082030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.465 [2024-07-26 16:34:11.082078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.465 [2024-07-26 16:34:11.082109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.465 [2024-07-26 16:34:11.082131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9300 is same with the state(5) to be set 00:28:51.465 [2024-07-26 16:34:11.083733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.465 [2024-07-26 16:34:11.083764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.465 [2024-07-26 16:34:11.083794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.465 [2024-07-26 16:34:11.083816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.465 [2024-07-26 16:34:11.083840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.465 [2024-07-26 16:34:11.083862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.465 [2024-07-26 16:34:11.083886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.465 [2024-07-26 16:34:11.083907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.465 [2024-07-26 16:34:11.083930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.465 [2024-07-26 16:34:11.083951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.465 [2024-07-26 16:34:11.083975] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.465 [2024-07-26 16:34:11.083996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.465 [2024-07-26 16:34:11.084019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.465 [2024-07-26 16:34:11.084041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.465 [2024-07-26 16:34:11.084088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.465 [2024-07-26 16:34:11.084119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.465 [2024-07-26 16:34:11.084145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.465 [2024-07-26 16:34:11.084167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.465 [2024-07-26 16:34:11.084191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.465 [2024-07-26 16:34:11.084212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.465 [2024-07-26 16:34:11.084237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.465 [2024-07-26 16:34:11.084259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.465 [2024-07-26 16:34:11.084283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.465 [2024-07-26 16:34:11.084306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.465 [2024-07-26 16:34:11.084331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.465 [2024-07-26 16:34:11.084353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.465 [2024-07-26 16:34:11.084392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.465 [2024-07-26 16:34:11.084414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.465 [2024-07-26 16:34:11.084452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.465 [2024-07-26 16:34:11.084473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.465 [2024-07-26 16:34:11.084496] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.465 [2024-07-26 16:34:11.084518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.465 [2024-07-26 16:34:11.084541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.465 [2024-07-26 16:34:11.084561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.465 [2024-07-26 16:34:11.084584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.465 [2024-07-26 16:34:11.084605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.465 [2024-07-26 16:34:11.084628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.465 [2024-07-26 16:34:11.084649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.465 [2024-07-26 16:34:11.084672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.465 [2024-07-26 16:34:11.084692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.465 [2024-07-26 16:34:11.084720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.465 [2024-07-26 16:34:11.084742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.465 [2024-07-26 16:34:11.084766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.466 [2024-07-26 16:34:11.084787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.466 [2024-07-26 16:34:11.084810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.466 [2024-07-26 16:34:11.084831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.466 [2024-07-26 16:34:11.084854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.466 [2024-07-26 16:34:11.084876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.466 [2024-07-26 16:34:11.084900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.466 [2024-07-26 16:34:11.084921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.466 [2024-07-26 16:34:11.084944] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.466 [2024-07-26 16:34:11.084964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.466 [2024-07-26 16:34:11.084988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.466 [2024-07-26 16:34:11.085008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.466 [2024-07-26 16:34:11.085032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.466 [2024-07-26 16:34:11.085078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.466 [2024-07-26 16:34:11.085115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.466 [2024-07-26 16:34:11.085137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.466 [2024-07-26 16:34:11.085161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.466 [2024-07-26 16:34:11.085183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.466 [2024-07-26 16:34:11.085207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.466 [2024-07-26 16:34:11.085228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.466 [2024-07-26 16:34:11.085252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.466 [2024-07-26 16:34:11.085274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.466 [2024-07-26 16:34:11.085298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.466 [2024-07-26 16:34:11.085326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.466 [2024-07-26 16:34:11.085366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.466 [2024-07-26 16:34:11.085390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.466 [2024-07-26 16:34:11.085414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.466 [2024-07-26 16:34:11.085436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.466 [2024-07-26 16:34:11.085461] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.466 [2024-07-26 16:34:11.085482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.466 [2024-07-26 16:34:11.085506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.466 [2024-07-26 16:34:11.085527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.466 [2024-07-26 16:34:11.085550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.466 [2024-07-26 16:34:11.085571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.466 [2024-07-26 16:34:11.085595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.466 [2024-07-26 16:34:11.085617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.466 [2024-07-26 16:34:11.085641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.466 [2024-07-26 16:34:11.085661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.466 [2024-07-26 16:34:11.085684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.466 [2024-07-26 16:34:11.085705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.466 [2024-07-26 16:34:11.085729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.466 [2024-07-26 16:34:11.085749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.466 [2024-07-26 16:34:11.085773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.466 [2024-07-26 16:34:11.085794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.466 [2024-07-26 16:34:11.085816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.466 [2024-07-26 16:34:11.085837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.466 [2024-07-26 16:34:11.085861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.466 [2024-07-26 16:34:11.085881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.466 [2024-07-26 16:34:11.085909] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.466 [2024-07-26 16:34:11.085931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.466 [2024-07-26 16:34:11.085954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.466 [2024-07-26 16:34:11.085976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.466 [2024-07-26 16:34:11.085999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.466 [2024-07-26 16:34:11.086019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.466 [2024-07-26 16:34:11.086064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.466 [2024-07-26 16:34:11.086089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.466 [2024-07-26 16:34:11.086113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.466 [2024-07-26 16:34:11.086135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.466 [2024-07-26 16:34:11.086159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.466 [2024-07-26 16:34:11.086180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.466 [2024-07-26 16:34:11.086204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.466 [2024-07-26 16:34:11.086226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.466 [2024-07-26 16:34:11.086251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.466 [2024-07-26 16:34:11.086273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.466 [2024-07-26 16:34:11.086296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.466 [2024-07-26 16:34:11.086318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.466 [2024-07-26 16:34:11.086357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.466 [2024-07-26 16:34:11.086379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.466 [2024-07-26 16:34:11.086402] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.466 [2024-07-26 16:34:11.086423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.466 [2024-07-26 16:34:11.086446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.466 [2024-07-26 16:34:11.086467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.466 [2024-07-26 16:34:11.086491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.466 [2024-07-26 16:34:11.086516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.466 [2024-07-26 16:34:11.086540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.466 [2024-07-26 16:34:11.086561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.466 [2024-07-26 16:34:11.086585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.466 [2024-07-26 16:34:11.086606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.466 [2024-07-26 16:34:11.086630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.466 [2024-07-26 16:34:11.086651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.467 [2024-07-26 16:34:11.086675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.467 [2024-07-26 16:34:11.086695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.467 [2024-07-26 16:34:11.086718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.467 [2024-07-26 16:34:11.086739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.467 [2024-07-26 16:34:11.086763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.467 [2024-07-26 16:34:11.086784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.467 [2024-07-26 16:34:11.086805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9580 is same with the state(5) to be set 00:28:51.467 [2024-07-26 16:34:11.088466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:28:51.467 [2024-07-26 16:34:11.088507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode10] resetting controller 00:28:51.467 [2024-07-26 16:34:11.088534] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.467 [2024-07-26 16:34:11.088558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:28:51.467 [2024-07-26 16:34:11.088714] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:51.467 [2024-07-26 16:34:11.088756] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:51.467 [2024-07-26 16:34:11.088789] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:51.467 [2024-07-26 16:34:11.088819] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:51.467 [2024-07-26 16:34:11.088990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:28:51.467 [2024-07-26 16:34:11.089023] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:28:51.467 [2024-07-26 16:34:11.089100] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:28:51.467 [2024-07-26 16:34:11.089132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:28:51.467 [2024-07-26 16:34:11.089483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.467 [2024-07-26 16:34:11.089521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3400 with addr=10.0.0.2, port=4420 00:28:51.467 [2024-07-26 16:34:11.089552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3400 is same with the state(5) to be set 00:28:51.467 [2024-07-26 16:34:11.089733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.467 [2024-07-26 16:34:11.089765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6880 with addr=10.0.0.2, port=4420 00:28:51.467 [2024-07-26 16:34:11.089788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6880 is same with the state(5) to be set 00:28:51.467 [2024-07-26 16:34:11.089973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.467 [2024-07-26 16:34:11.090006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:28:51.467 [2024-07-26 16:34:11.090028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:28:51.467 [2024-07-26 16:34:11.090220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.467 [2024-07-26 16:34:11.090253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2c80 with addr=10.0.0.2, port=4420 00:28:51.467 [2024-07-26 16:34:11.090276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2c80 is same with the state(5) to be set 00:28:51.467 [2024-07-26 16:34:11.093348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.467 [2024-07-26 16:34:11.093390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: 
*ERROR*: sock connection error of tqpair=0x6150001f3b80 with addr=10.0.0.2, port=4420 00:28:51.467 [2024-07-26 16:34:11.093414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3b80 is same with the state(5) to be set 00:28:51.467 [2024-07-26 16:34:11.093617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.467 [2024-07-26 16:34:11.093652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=4420 00:28:51.467 [2024-07-26 16:34:11.093675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(5) to be set 00:28:51.467 [2024-07-26 16:34:11.093850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.467 [2024-07-26 16:34:11.093884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4a80 with addr=10.0.0.2, port=4420 00:28:51.467 [2024-07-26 16:34:11.093906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4a80 is same with the state(5) to be set 00:28:51.467 [2024-07-26 16:34:11.094129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.467 [2024-07-26 16:34:11.094163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f5200 with addr=10.0.0.2, port=4420 00:28:51.467 [2024-07-26 16:34:11.094185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5200 is same with the state(5) to be set 00:28:51.467 [2024-07-26 16:34:11.094218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3400 (9): Bad file descriptor 00:28:51.467 [2024-07-26 16:34:11.094249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6880 (9): Bad file descriptor 00:28:51.467 [2024-07-26 16:34:11.094277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:28:51.467 [2024-07-26 16:34:11.094304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2c80 (9): Bad file descriptor 00:28:51.467 [2024-07-26 16:34:11.094547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.467 [2024-07-26 16:34:11.094579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.467 [2024-07-26 16:34:11.094616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.467 [2024-07-26 16:34:11.094641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.467 [2024-07-26 16:34:11.094666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.467 [2024-07-26 16:34:11.094688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.467 [2024-07-26 16:34:11.094712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.467 
[2024-07-26 16:34:11.094734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.467 [2024-07-26 16:34:11.094757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.467 [2024-07-26 16:34:11.094779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.467 [2024-07-26 16:34:11.094802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.467 [2024-07-26 16:34:11.094822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.467 [2024-07-26 16:34:11.094845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.467 [2024-07-26 16:34:11.094866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.467 [2024-07-26 16:34:11.094890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.467 [2024-07-26 16:34:11.094911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.467 [2024-07-26 16:34:11.094935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.467 [2024-07-26 16:34:11.094956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.467 [2024-07-26 16:34:11.094980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.467 [2024-07-26 16:34:11.095001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.467 [2024-07-26 16:34:11.095024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.467 [2024-07-26 16:34:11.095045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.467 [2024-07-26 16:34:11.095102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.467 [2024-07-26 16:34:11.095127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.467 [2024-07-26 16:34:11.095151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.467 [2024-07-26 16:34:11.095173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.467 [2024-07-26 16:34:11.095197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.468 [2024-07-26 16:34:11.095225] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.468 [2024-07-26 16:34:11.095251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.468 [2024-07-26 16:34:11.095273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.468 [2024-07-26 16:34:11.095297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.468 [2024-07-26 16:34:11.095330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.468 [2024-07-26 16:34:11.095354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.468 [2024-07-26 16:34:11.095392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.468 [2024-07-26 16:34:11.095416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.468 [2024-07-26 16:34:11.095437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.468 [2024-07-26 16:34:11.095461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.468 [2024-07-26 16:34:11.095482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.468 [2024-07-26 16:34:11.095505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.468 [2024-07-26 16:34:11.095527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.468 [2024-07-26 16:34:11.095550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.468 [2024-07-26 16:34:11.095571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.468 [2024-07-26 16:34:11.095595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.468 [2024-07-26 16:34:11.095616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.468 [2024-07-26 16:34:11.095639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.468 [2024-07-26 16:34:11.095661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.468 [2024-07-26 16:34:11.095684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.468 [2024-07-26 16:34:11.095705] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.468 [2024-07-26 16:34:11.095728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.468 [2024-07-26 16:34:11.095750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.468 [2024-07-26 16:34:11.095773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.468 [2024-07-26 16:34:11.095794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.468 [2024-07-26 16:34:11.095822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.468 [2024-07-26 16:34:11.095844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.468 [2024-07-26 16:34:11.095868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.468 [2024-07-26 16:34:11.095889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.468 [2024-07-26 16:34:11.095913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.468 [2024-07-26 16:34:11.095934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.468 [2024-07-26 16:34:11.095958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.468 [2024-07-26 16:34:11.095980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.468 [2024-07-26 16:34:11.096003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.468 [2024-07-26 16:34:11.096024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.468 [2024-07-26 16:34:11.096047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.468 [2024-07-26 16:34:11.096091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.468 [2024-07-26 16:34:11.096125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.468 [2024-07-26 16:34:11.096147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.468 [2024-07-26 16:34:11.096171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.468 [2024-07-26 16:34:11.096192] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.468 [2024-07-26 16:34:11.096216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.468 [2024-07-26 16:34:11.096238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.468 [2024-07-26 16:34:11.096262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.468 [2024-07-26 16:34:11.096283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.468 [2024-07-26 16:34:11.096307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.468 [2024-07-26 16:34:11.096331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.468 [2024-07-26 16:34:11.096355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.468 [2024-07-26 16:34:11.096390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.468 [2024-07-26 16:34:11.096416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.468 [2024-07-26 16:34:11.096443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.468 [2024-07-26 16:34:11.096468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.468 [2024-07-26 16:34:11.096489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.468 [2024-07-26 16:34:11.096512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.468 [2024-07-26 16:34:11.096534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.468 [2024-07-26 16:34:11.096557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.468 [2024-07-26 16:34:11.096578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.468 [2024-07-26 16:34:11.096601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.468 [2024-07-26 16:34:11.096623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.468 [2024-07-26 16:34:11.096646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.468 [2024-07-26 16:34:11.096667] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.468 [2024-07-26 16:34:11.096689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.468 [2024-07-26 16:34:11.096710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.468 [2024-07-26 16:34:11.096734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.468 [2024-07-26 16:34:11.096755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.468 [2024-07-26 16:34:11.096779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.468 [2024-07-26 16:34:11.096800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.468 [2024-07-26 16:34:11.096822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.468 [2024-07-26 16:34:11.096844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.468 [2024-07-26 16:34:11.096867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.468 [2024-07-26 16:34:11.096889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.468 [2024-07-26 16:34:11.096912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.468 [2024-07-26 16:34:11.096933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.468 [2024-07-26 16:34:11.096957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.468 [2024-07-26 16:34:11.096978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.468 [2024-07-26 16:34:11.097006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.468 [2024-07-26 16:34:11.097028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.468 [2024-07-26 16:34:11.097051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.469 [2024-07-26 16:34:11.097105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.469 [2024-07-26 16:34:11.097131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.469 [2024-07-26 16:34:11.097153] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.469 [2024-07-26 16:34:11.097177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.469 [2024-07-26 16:34:11.097198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.469 [2024-07-26 16:34:11.097222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.469 [2024-07-26 16:34:11.097245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.469 [2024-07-26 16:34:11.097268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.469 [2024-07-26 16:34:11.097289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.469 [2024-07-26 16:34:11.097313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.469 [2024-07-26 16:34:11.097354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.469 [2024-07-26 16:34:11.097396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.469 [2024-07-26 16:34:11.097418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.469 [2024-07-26 16:34:11.097441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.469 [2024-07-26 16:34:11.097462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.469 [2024-07-26 16:34:11.097485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.469 [2024-07-26 16:34:11.097506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.469 [2024-07-26 16:34:11.097529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.469 [2024-07-26 16:34:11.097550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.469 [2024-07-26 16:34:11.097573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.469 [2024-07-26 16:34:11.097594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.469 [2024-07-26 16:34:11.097617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.469 [2024-07-26 16:34:11.097642] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.469 [2024-07-26 16:34:11.097664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9800 is same with the state(5) to be set 00:28:51.469 [2024-07-26 16:34:11.099279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.469 [2024-07-26 16:34:11.099311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.469 [2024-07-26 16:34:11.099351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.469 [2024-07-26 16:34:11.099374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.469 [2024-07-26 16:34:11.099398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.469 [2024-07-26 16:34:11.099436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.469 [2024-07-26 16:34:11.099460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.469 [2024-07-26 16:34:11.099482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.469 [2024-07-26 16:34:11.099505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.469 [2024-07-26 16:34:11.099527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.469 [2024-07-26 16:34:11.099551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.469 [2024-07-26 16:34:11.099572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.469 [2024-07-26 16:34:11.099596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.469 [2024-07-26 16:34:11.099618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.469 [2024-07-26 16:34:11.099641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.469 [2024-07-26 16:34:11.099663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.469 [2024-07-26 16:34:11.099686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.469 [2024-07-26 16:34:11.099708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.469 [2024-07-26 16:34:11.099731] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.469 [2024-07-26 16:34:11.099752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.469 [2024-07-26 16:34:11.099775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.469 [2024-07-26 16:34:11.099796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.469 [2024-07-26 16:34:11.099820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.469 [2024-07-26 16:34:11.099846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.469 [2024-07-26 16:34:11.099872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.469 [2024-07-26 16:34:11.099893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.469 [2024-07-26 16:34:11.099917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.469 [2024-07-26 16:34:11.099950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.469 [2024-07-26 16:34:11.099973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.469 [2024-07-26 16:34:11.099995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.469 [2024-07-26 16:34:11.100018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.469 [2024-07-26 16:34:11.100055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.469 [2024-07-26 16:34:11.100090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.469 [2024-07-26 16:34:11.100113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.469 [2024-07-26 16:34:11.100137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.469 [2024-07-26 16:34:11.100160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.469 [2024-07-26 16:34:11.100184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.469 [2024-07-26 16:34:11.100206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.469 [2024-07-26 16:34:11.100230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.469 [2024-07-26 16:34:11.100252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.469 [2024-07-26 16:34:11.100276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.469 [2024-07-26 16:34:11.100298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.469 [2024-07-26 16:34:11.100327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.469 [2024-07-26 16:34:11.100349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.469 [2024-07-26 16:34:11.100388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.469 [2024-07-26 16:34:11.100409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.469 [2024-07-26 16:34:11.100433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.469 [2024-07-26 16:34:11.100453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.469 [2024-07-26 16:34:11.100477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.469 [2024-07-26 16:34:11.100502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.469 [2024-07-26 16:34:11.100527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.469 [2024-07-26 16:34:11.100549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.469 [2024-07-26 16:34:11.100572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.469 [2024-07-26 16:34:11.100593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.469 [2024-07-26 16:34:11.100616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.470 [2024-07-26 16:34:11.100637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.470 [2024-07-26 16:34:11.100661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.470 [2024-07-26 16:34:11.100682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.470 [2024-07-26 16:34:11.100706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.470 [2024-07-26 16:34:11.100727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.470 [2024-07-26 16:34:11.100751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.470 [2024-07-26 16:34:11.100772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.470 [2024-07-26 16:34:11.100795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.470 [2024-07-26 16:34:11.100816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.470 [2024-07-26 16:34:11.100840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.470 [2024-07-26 16:34:11.100860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.470 [2024-07-26 16:34:11.100884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.470 [2024-07-26 16:34:11.100905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.470 [2024-07-26 16:34:11.100929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.470 [2024-07-26 16:34:11.100950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.470 [2024-07-26 16:34:11.100973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.470 [2024-07-26 16:34:11.100994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.470 [2024-07-26 16:34:11.101017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.470 [2024-07-26 16:34:11.101038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.470 [2024-07-26 16:34:11.101104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.470 [2024-07-26 16:34:11.101129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.470 [2024-07-26 16:34:11.101153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.470 [2024-07-26 16:34:11.101176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.470 [2024-07-26 16:34:11.101200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:51.470 [2024-07-26 16:34:11.101222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.470 [2024-07-26 16:34:11.101246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.470 [2024-07-26 16:34:11.101268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.470 [2024-07-26 16:34:11.101292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.470 [2024-07-26 16:34:11.101314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.470 [2024-07-26 16:34:11.101338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.470 [2024-07-26 16:34:11.101360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.470 [2024-07-26 16:34:11.101384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.470 [2024-07-26 16:34:11.101406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.470 [2024-07-26 16:34:11.101430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.470 [2024-07-26 16:34:11.101452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.470 [2024-07-26 16:34:11.101477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.470 [2024-07-26 16:34:11.101500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.470 [2024-07-26 16:34:11.101525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.470 [2024-07-26 16:34:11.101547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.470 [2024-07-26 16:34:11.101573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.470 [2024-07-26 16:34:11.101595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.470 [2024-07-26 16:34:11.101620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.470 [2024-07-26 16:34:11.101642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.470 [2024-07-26 16:34:11.101667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:51.470 [2024-07-26 16:34:11.101694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.470 [2024-07-26 16:34:11.101720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.470 [2024-07-26 16:34:11.101743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.470 [2024-07-26 16:34:11.101768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.470 [2024-07-26 16:34:11.101790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.470 [2024-07-26 16:34:11.101815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.470 [2024-07-26 16:34:11.101837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.470 [2024-07-26 16:34:11.101861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.470 [2024-07-26 16:34:11.101883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.470 [2024-07-26 16:34:11.101907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.470 [2024-07-26 16:34:11.101929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.470 [2024-07-26 16:34:11.101953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.470 [2024-07-26 16:34:11.101975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.470 [2024-07-26 16:34:11.101999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.470 [2024-07-26 16:34:11.102021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.470 [2024-07-26 16:34:11.102066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.470 [2024-07-26 16:34:11.102090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.470 [2024-07-26 16:34:11.102116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.470 [2024-07-26 16:34:11.102138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.470 [2024-07-26 16:34:11.102162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.470 [2024-07-26 
16:34:11.102184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.470 [2024-07-26 16:34:11.102208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.470 [2024-07-26 16:34:11.102230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.470 [2024-07-26 16:34:11.102254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.470 [2024-07-26 16:34:11.102277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.470 [2024-07-26 16:34:11.102305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.470 [2024-07-26 16:34:11.102329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.470 [2024-07-26 16:34:11.102354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.470 [2024-07-26 16:34:11.102376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.470 [2024-07-26 16:34:11.102398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9a80 is same with the state(5) to be set 00:28:51.470 [2024-07-26 16:34:11.107670] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:28:51.470 task offset: 17408 on job bdev=Nvme10n1 fails
00:28:51.470
00:28:51.470 Latency(us)
00:28:51.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:51.470 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:51.470 Job: Nvme1n1 ended in about 0.98 seconds with error
00:28:51.470 Verification LBA range: start 0x0 length 0x400
00:28:51.470 Nvme1n1 : 0.98 130.10 8.13 65.05 0.00 324314.20 25243.50 299815.06
00:28:51.471 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:51.471 Job: Nvme2n1 ended in about 1.00 seconds with error
00:28:51.471 Verification LBA range: start 0x0 length 0x400
00:28:51.471 Nvme2n1 : 1.00 127.56 7.97 63.78 0.00 324112.94 24078.41 274959.93
00:28:51.471 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:51.471 Job: Nvme3n1 ended in about 0.97 seconds with error
00:28:51.471 Verification LBA range: start 0x0 length 0x400
00:28:51.471 Nvme3n1 : 0.97 198.57 12.41 66.19 0.00 228825.60 9903.22 307582.29
00:28:51.471 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:51.471 Job: Nvme4n1 ended in about 1.01 seconds with error
00:28:51.471 Verification LBA range: start 0x0 length 0x400
00:28:51.471 Nvme4n1 : 1.01 126.96 7.94 63.48 0.00 312365.26 23301.69 306028.85
00:28:51.471 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:51.471 Job: Nvme5n1 ended in about 1.01 seconds with error
00:28:51.471 Verification LBA range: start 0x0 length 0x400
00:28:51.471 Nvme5n1 : 1.01 126.36 7.90 63.18 0.00 307243.49 55147.33 299815.06
00:28:51.471 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:51.471 Job: Nvme6n1 ended in about 1.02 seconds with error
00:28:51.471 Verification LBA range: start 0x0 length 0x400
00:28:51.471 Nvme6n1 : 1.02 125.77 7.86 62.89 0.00 302176.08 27962.03 301368.51
00:28:51.471 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:51.471 Job: Nvme7n1 ended in about 1.02 seconds with error
00:28:51.471 Verification LBA range: start 0x0 length 0x400
00:28:51.471 Nvme7n1 : 1.02 125.20 7.83 62.60 0.00 297027.76 27185.30 321563.31
00:28:51.471 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:51.471 Job: Nvme8n1 ended in about 1.03 seconds with error
00:28:51.471 Verification LBA range: start 0x0 length 0x400
00:28:51.471 Nvme8n1 : 1.03 123.89 7.74 61.94 0.00 294123.90 19320.98 309135.74
00:28:51.471 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:51.471 Job: Nvme9n1 ended in about 1.04 seconds with error
00:28:51.471 Verification LBA range: start 0x0 length 0x400
00:28:51.471 Nvme9n1 : 1.04 123.33 7.71 61.66 0.00 289158.38 26020.22 290494.39
00:28:51.471 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:51.471 Job: Nvme10n1 ended in about 0.97 seconds with error
00:28:51.471 Verification LBA range: start 0x0 length 0x400
00:28:51.471 Nvme10n1 : 0.97 132.56 8.29 66.28 0.00 258503.36 14078.10 352632.23
00:28:51.471 ===================================================================================================================
00:28:51.471 Total : 1340.30 83.77 637.06 0.00 291689.63 9903.22 352632.23
00:28:51.471 [2024-07-26 16:34:11.189899] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:51.471 [2024-07-26 16:34:11.189999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:28:51.471 [2024-07-26 16:34:11.190166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3b80 (9): Bad file descriptor 00:28:51.471 [2024-07-26 16:34:11.190213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:28:51.471 [2024-07-26 16:34:11.190244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4a80 (9): Bad file descriptor 00:28:51.471 [2024-07-26 16:34:11.190273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5200 (9): Bad file descriptor 00:28:51.471 [2024-07-26 16:34:11.190299] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:28:51.471 [2024-07-26 16:34:11.190331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:28:51.471 [2024-07-26 16:34:11.190355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:28:51.471 [2024-07-26 16:34:11.190398] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:28:51.471 [2024-07-26 16:34:11.190420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:28:51.471 [2024-07-26 16:34:11.190439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
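The Total line of the Latency(us) table above appears to be the column-wise sum of the per-device rows (IOPS, MiB/s, Fail/s). A minimal Python sketch to cross-check it, with the values transcribed from the table; the variable names are chosen here for illustration, and the small MiB/s and Fail/s differences come from rounding in the printed output:

    # Cross-check the Total row of the Latency(us) table above.
    # Per-device values transcribed from the table: (IOPS, MiB/s, Fail/s).
    rows = {
        "Nvme1n1":  (130.10, 8.13, 65.05),
        "Nvme2n1":  (127.56, 7.97, 63.78),
        "Nvme3n1":  (198.57, 12.41, 66.19),
        "Nvme4n1":  (126.96, 7.94, 63.48),
        "Nvme5n1":  (126.36, 7.90, 63.18),
        "Nvme6n1":  (125.77, 7.86, 62.89),
        "Nvme7n1":  (125.20, 7.83, 62.60),
        "Nvme8n1":  (123.89, 7.74, 61.94),
        "Nvme9n1":  (123.33, 7.71, 61.66),
        "Nvme10n1": (132.56, 8.29, 66.28),
    }
    # Sum each column and round the way the report does.
    iops, mibs, fails = (round(sum(col), 2) for col in zip(*rows.values()))
    print(iops, mibs, fails)  # 1340.3 83.78 637.05 vs. printed Total 1340.30 83.77 637.06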
00:28:51.471 [2024-07-26 16:34:11.190468] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.471 [2024-07-26 16:34:11.190490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.471 [2024-07-26 16:34:11.190510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.471 [2024-07-26 16:34:11.190538] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:28:51.471 [2024-07-26 16:34:11.190559] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:28:51.471 [2024-07-26 16:34:11.190578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:28:51.471 [2024-07-26 16:34:11.190669] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:51.730 [2024-07-26 16:34:11.190702] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:51.730 [2024-07-26 16:34:11.190733] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:51.730 [2024-07-26 16:34:11.190760] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:51.730 [2024-07-26 16:34:11.190802] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:51.730 [2024-07-26 16:34:11.190829] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:51.730 [2024-07-26 16:34:11.190856] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:51.730 [2024-07-26 16:34:11.190882] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:51.730 [2024-07-26 16:34:11.191145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.730 [2024-07-26 16:34:11.191177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.730 [2024-07-26 16:34:11.191205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.730 [2024-07-26 16:34:11.191223] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.730 [2024-07-26 16:34:11.191584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.730 [2024-07-26 16:34:11.191639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f5980 with addr=10.0.0.2, port=4420 00:28:51.730 [2024-07-26 16:34:11.191667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5980 is same with the state(5) to be set 00:28:51.730 [2024-07-26 16:34:11.191830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.730 [2024-07-26 16:34:11.191864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6100 with addr=10.0.0.2, port=4420 00:28:51.730 [2024-07-26 16:34:11.191886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6100 is same with the state(5) to be set 00:28:51.730 [2024-07-26 16:34:11.191908] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:28:51.730 [2024-07-26 16:34:11.191926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:28:51.730 [2024-07-26 16:34:11.191945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:28:51.730 [2024-07-26 16:34:11.191974] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:28:51.730 [2024-07-26 16:34:11.191995] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:28:51.730 [2024-07-26 16:34:11.192014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:28:51.730 [2024-07-26 16:34:11.192040] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:28:51.730 [2024-07-26 16:34:11.192069] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:28:51.730 [2024-07-26 16:34:11.192091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:28:51.730 [2024-07-26 16:34:11.192128] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:28:51.730 [2024-07-26 16:34:11.192148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:28:51.730 [2024-07-26 16:34:11.192166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:28:51.730 [2024-07-26 16:34:11.192242] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:51.730 [2024-07-26 16:34:11.192274] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:51.731 [2024-07-26 16:34:11.192300] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:51.731 [2024-07-26 16:34:11.192337] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:51.731 [2024-07-26 16:34:11.193478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.731 [2024-07-26 16:34:11.193510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.731 [2024-07-26 16:34:11.193529] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.731 [2024-07-26 16:34:11.193545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.731 [2024-07-26 16:34:11.193591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5980 (9): Bad file descriptor 00:28:51.731 [2024-07-26 16:34:11.193624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6100 (9): Bad file descriptor 00:28:51.731 [2024-07-26 16:34:11.193772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:28:51.731 [2024-07-26 16:34:11.193809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.731 [2024-07-26 16:34:11.193836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:28:51.731 [2024-07-26 16:34:11.193904] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:28:51.731 [2024-07-26 16:34:11.193929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:28:51.731 [2024-07-26 16:34:11.193949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:28:51.731 [2024-07-26 16:34:11.193977] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:28:51.731 [2024-07-26 16:34:11.193999] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:28:51.731 [2024-07-26 16:34:11.194018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:28:51.731 [2024-07-26 16:34:11.194103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:28:51.731 [2024-07-26 16:34:11.194153] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.731 [2024-07-26 16:34:11.194177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.731 [2024-07-26 16:34:11.194389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.731 [2024-07-26 16:34:11.194425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2c80 with addr=10.0.0.2, port=4420 00:28:51.731 [2024-07-26 16:34:11.194449] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2c80 is same with the state(5) to be set 00:28:51.731 [2024-07-26 16:34:11.194644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.731 [2024-07-26 16:34:11.194679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:28:51.731 [2024-07-26 16:34:11.194702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:28:51.731 [2024-07-26 16:34:11.194891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.731 [2024-07-26 16:34:11.194925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6880 with addr=10.0.0.2, port=4420 00:28:51.731 [2024-07-26 16:34:11.194947] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6880 is same with the state(5) to be set 00:28:51.731 [2024-07-26 16:34:11.195195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.731 [2024-07-26 16:34:11.195231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3400 with addr=10.0.0.2, port=4420 00:28:51.731 [2024-07-26 16:34:11.195254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3400 is same with the state(5) to be set 00:28:51.731 [2024-07-26 16:34:11.195282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2c80 (9): Bad file descriptor 00:28:51.731 [2024-07-26 16:34:11.195311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:28:51.731 [2024-07-26 16:34:11.195339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6880 (9): Bad file descriptor 00:28:51.731 [2024-07-26 16:34:11.195408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3400 (9): Bad file descriptor 00:28:51.731 [2024-07-26 16:34:11.195439] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:28:51.731 [2024-07-26 16:34:11.195460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:28:51.731 [2024-07-26 16:34:11.195485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:28:51.731 [2024-07-26 16:34:11.195514] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.731 [2024-07-26 16:34:11.195535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.731 [2024-07-26 16:34:11.195554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:51.731 [2024-07-26 16:34:11.195580] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:28:51.731 [2024-07-26 16:34:11.195600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:28:51.731 [2024-07-26 16:34:11.195619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:28:51.731 [2024-07-26 16:34:11.195699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.731 [2024-07-26 16:34:11.195725] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.731 [2024-07-26 16:34:11.195743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.731 [2024-07-26 16:34:11.195777] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:28:51.731 [2024-07-26 16:34:11.195794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:28:51.731 [2024-07-26 16:34:11.195812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:28:51.731 [2024-07-26 16:34:11.195887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.260 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:28:54.260 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:28:55.640 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 744073 00:28:55.640 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (744073) - No such process 00:28:55.640 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:28:55.640 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:28:55.640 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:55.640 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:55.640 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:55.640 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:55.640 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:55.640 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:28:55.640 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:55.640 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:28:55.640 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:55.640 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:28:55.640 rmmod nvme_tcp 00:28:55.640 rmmod nvme_fabrics 00:28:55.640 rmmod nvme_keyring 00:28:55.640 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:55.640 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:28:55.640 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:28:55.640 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:28:55.640 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:55.640 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:55.640 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:55.640 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:55.640 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:55.640 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:55.640 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:55.640 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:57.553 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:57.553 00:28:57.553 real 0m11.831s 00:28:57.553 user 0m34.887s 00:28:57.553 sys 0m1.979s 00:28:57.553 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:57.553 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:57.553 ************************************ 00:28:57.553 END TEST nvmf_shutdown_tc3 00:28:57.553 ************************************ 00:28:57.553 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:28:57.553 00:28:57.553 real 0m42.775s 00:28:57.553 user 2m15.460s 00:28:57.553 sys 0m8.196s 00:28:57.553 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:57.553 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:57.553 ************************************ 00:28:57.553 END TEST nvmf_shutdown 00:28:57.553 ************************************ 00:28:57.553 16:34:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:28:57.553 00:28:57.553 real 17m2.671s 00:28:57.553 user 47m19.070s 00:28:57.553 sys 3m24.054s 00:28:57.553 16:34:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:57.553 16:34:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:57.553 ************************************ 00:28:57.553 END TEST nvmf_target_extra 00:28:57.553 ************************************ 00:28:57.553 16:34:17 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:28:57.553 16:34:17 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:57.554 16:34:17 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:57.554 16:34:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:57.554 ************************************ 00:28:57.554 START TEST nvmf_host 00:28:57.554 ************************************ 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:28:57.554 * Looking for test storage... 00:28:57.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.554 ************************************ 00:28:57.554 START TEST nvmf_multicontroller 00:28:57.554 ************************************ 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh 
--transport=tcp 00:28:57.554 * Looking for test storage... 00:28:57.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:57.554 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:28:57.812 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:57.812 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:57.812 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:57.812 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:57.812 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:57.812 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:57.812 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:57.812 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:57.812 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:57.812 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:57.812 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:57.812 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:57.812 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:57.812 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:57.812 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:57.812 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:57.812 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:57.812 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:57.812 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:57.812 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:57.812 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:28:57.812 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.812 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.812 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:28:57.813 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.813 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:28:57.813 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:57.813 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:57.813 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:57.813 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:57.813 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:57.813 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:57.813 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:57.813 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:57.813 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:57.813 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:57.813 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:28:57.813 16:34:17 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:28:57.813 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:57.813 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:28:57.813 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:28:57.813 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:57.813 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:57.813 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:57.813 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:57.813 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:57.813 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:57.813 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:57.813 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:57.813 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:57.813 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:57.813 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:28:57.813 16:34:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:59.716 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:59.716 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:59.716 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:59.716 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:59.716 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:59.716 16:34:19 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:59.717 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:59.717 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:59.717 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:59.717 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:59.717 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:59.717 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:59.717 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:59.717 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:59.717 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:59.717 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:59.717 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:59.717 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:59.717 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:59.717 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:59.717 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:59.717 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:59.717 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:59.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:59.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:28:59.717 00:28:59.717 --- 10.0.0.2 ping statistics --- 00:28:59.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.717 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:28:59.717 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:59.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:59.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:28:59.717 00:28:59.717 --- 10.0.0.1 ping statistics --- 00:28:59.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.717 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:28:59.717 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:59.717 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:28:59.717 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:59.717 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:59.717 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:59.717 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:59.717 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:59.717 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:59.717 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:59.717 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:28:59.717 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:59.717 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:59.717 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:59.717 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=747014 00:28:59.717 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:59.717 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 747014 00:28:59.717 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 747014 ']' 00:28:59.717 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.717 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:59.717 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:59.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:59.717 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:59.717 16:34:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:59.717 [2024-07-26 16:34:19.410662] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
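The block above is the stock nvmftestinit/nvmf_tcp_init path from test/nvmf/common.sh: two E810 ports bound to the ice driver (0000:0a:00.0/1, exposed as cvl_0_0 and cvl_0_1) are detected, the target-side port is moved into a private network namespace, both ends get addresses on 10.0.0.0/24, and reachability is verified in both directions before nvmf_tgt is started inside that namespace with core mask 0xE (cores 1-3). A condensed sketch of the namespace wiring, using the interface names and addresses from this run (they will differ on other nodes):

    # Isolate the target port in its own namespace; the initiator keeps cvl_0_1.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # 10.0.0.1 = initiator side, 10.0.0.2 = target side.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Admit NVMe/TCP traffic on the default port, then confirm reachability both ways.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1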
00:28:59.717 [2024-07-26 16:34:19.410811] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:59.975 EAL: No free 2048 kB hugepages reported on node 1 00:28:59.975 [2024-07-26 16:34:19.550390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:00.233 [2024-07-26 16:34:19.805483] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:00.233 [2024-07-26 16:34:19.805562] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:00.233 [2024-07-26 16:34:19.805607] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:00.233 [2024-07-26 16:34:19.805628] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:00.233 [2024-07-26 16:34:19.805650] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:00.233 [2024-07-26 16:34:19.805789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:00.233 [2024-07-26 16:34:19.805883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:00.233 [2024-07-26 16:34:19.805892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:00.798 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:00.798 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:29:00.798 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:00.798 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:00.798 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:00.798 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:00.798 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:00.798 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.798 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:00.798 [2024-07-26 16:34:20.408327] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:00.798 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:00.798 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:00.798 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.798 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:00.798 Malloc0 00:29:00.798 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:00.798 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:00.798 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.798 
16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:00.798 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:00.798 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:00.798 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.798 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:00.798 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:00.798 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:00.798 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.798 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:00.798 [2024-07-26 16:34:20.523659] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:00.798 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:00.798 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:00.798 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.798 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:00.798 [2024-07-26 16:34:20.531485] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:00.798 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:00.799 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:00.799 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.799 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:01.056 Malloc1 00:29:01.056 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.056 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:01.056 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.056 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:01.056 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.056 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:01.056 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.056 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:01.056 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.056 16:34:20 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:01.056 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.056 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:01.056 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.056 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:01.056 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.056 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:01.057 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.057 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=747167 00:29:01.057 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:01.057 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 747167 /var/tmp/bdevperf.sock 00:29:01.057 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 747167 ']' 00:29:01.057 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:01.057 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:01.057 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:01.057 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:01.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
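Up to this point the target has been configured entirely over JSON-RPC (rpc_cmd is the harness's wrapper around scripts/rpc.py): a TCP transport, two 64 MiB malloc bdevs with 512-byte blocks, and two subsystems that each export one namespace and listen on both 10.0.0.2:4420 and 10.0.0.2:4421. bdevperf is then launched idle (-z) on its own RPC socket so controllers can be attached to it before the workload starts. Condensed from the trace above, the sequence for cnode1 is (cnode2/Malloc1 is set up the same way):

    # Target side (against the nvmf_tgt RPC socket):
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

    # Host side: bdevperf waits idle (-z) on /var/tmp/bdevperf.sock;
    # -q 128 -o 4096 -w write -t 1 selects a 1 s, queue-depth-128, 4 KiB write workload.
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f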
00:29:01.057 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:01.057 16:34:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:01.992 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:01.992 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:29:01.992 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:29:01.992 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.992 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:02.250 NVMe0n1 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.250 1 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:02.250 request: 00:29:02.250 { 00:29:02.250 "name": "NVMe0", 00:29:02.250 "trtype": "tcp", 00:29:02.250 "traddr": "10.0.0.2", 00:29:02.250 "adrfam": "ipv4", 00:29:02.250 
"trsvcid": "4420", 00:29:02.250 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:02.250 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:02.250 "hostaddr": "10.0.0.2", 00:29:02.250 "hostsvcid": "60000", 00:29:02.250 "prchk_reftag": false, 00:29:02.250 "prchk_guard": false, 00:29:02.250 "hdgst": false, 00:29:02.250 "ddgst": false, 00:29:02.250 "method": "bdev_nvme_attach_controller", 00:29:02.250 "req_id": 1 00:29:02.250 } 00:29:02.250 Got JSON-RPC error response 00:29:02.250 response: 00:29:02.250 { 00:29:02.250 "code": -114, 00:29:02.250 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:29:02.250 } 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:02.250 request: 00:29:02.250 { 00:29:02.250 "name": "NVMe0", 00:29:02.250 "trtype": "tcp", 00:29:02.250 "traddr": "10.0.0.2", 00:29:02.250 "adrfam": "ipv4", 00:29:02.250 "trsvcid": "4420", 00:29:02.250 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:02.250 "hostaddr": "10.0.0.2", 00:29:02.250 "hostsvcid": "60000", 00:29:02.250 "prchk_reftag": false, 00:29:02.250 "prchk_guard": false, 00:29:02.250 "hdgst": false, 00:29:02.250 "ddgst": false, 00:29:02.250 "method": "bdev_nvme_attach_controller", 00:29:02.250 "req_id": 1 00:29:02.250 } 00:29:02.250 Got JSON-RPC error response 00:29:02.250 response: 00:29:02.250 { 00:29:02.250 "code": -114, 00:29:02.250 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:29:02.250 } 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:02.250 request: 00:29:02.250 { 00:29:02.250 "name": "NVMe0", 00:29:02.250 "trtype": "tcp", 00:29:02.250 "traddr": "10.0.0.2", 00:29:02.250 "adrfam": "ipv4", 00:29:02.250 "trsvcid": "4420", 00:29:02.250 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:02.250 "hostaddr": "10.0.0.2", 00:29:02.250 "hostsvcid": "60000", 00:29:02.250 "prchk_reftag": false, 00:29:02.250 "prchk_guard": false, 00:29:02.250 "hdgst": false, 00:29:02.250 "ddgst": false, 00:29:02.250 "multipath": "disable", 00:29:02.250 "method": "bdev_nvme_attach_controller", 00:29:02.250 "req_id": 1 00:29:02.250 } 00:29:02.250 Got JSON-RPC error response 00:29:02.250 response: 00:29:02.250 { 00:29:02.250 "code": -114, 00:29:02.250 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:29:02.250 } 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.250 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:02.250 request: 00:29:02.250 { 00:29:02.250 "name": "NVMe0", 00:29:02.250 "trtype": "tcp", 00:29:02.250 "traddr": "10.0.0.2", 00:29:02.250 "adrfam": "ipv4", 00:29:02.250 "trsvcid": "4420", 00:29:02.250 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:02.250 "hostaddr": "10.0.0.2", 00:29:02.250 "hostsvcid": "60000", 00:29:02.250 "prchk_reftag": false, 00:29:02.250 "prchk_guard": false, 00:29:02.250 "hdgst": false, 00:29:02.250 "ddgst": false, 00:29:02.251 "multipath": "failover", 00:29:02.251 "method": "bdev_nvme_attach_controller", 00:29:02.251 "req_id": 1 00:29:02.251 } 00:29:02.251 Got JSON-RPC error response 00:29:02.251 response: 00:29:02.251 { 00:29:02.251 "code": -114, 00:29:02.251 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:29:02.251 } 00:29:02.251 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:02.251 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:02.251 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:02.251 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:02.251 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:02.251 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:02.251 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.251 16:34:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:02.251 00:29:02.251 16:34:22 
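The four NOT blocks above are the negative half of the multicontroller test: once NVMe0 exists, bdev_nvme_attach_controller refuses to reuse that name with a different hostnqn, with a different subsystem NQN, or with -x disable / -x failover pointing at the path that is already attached, and each attempt returns JSON-RPC error -114 as shown. Reattaching the same subsystem on the second listener port, by contrast, is accepted; it adds a second path rather than a second controller (the later grep -c NVMe count of 2 comes from NVMe0 plus the separately attached NVMe1). In terms of the RPC calls traced above:

    # First path: explicit host-side source address/port (-i/-c); exposes NVMe0n1.
    rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
            -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

    # Rejected with error -114 ("A controller named NVMe0 already exists ..."):
    #   same name + different hostnqn (-q nqn.2021-09-7.io.spdk:00001)
    #   same name + different subsystem (-n nqn.2016-06.io.spdk:cnode2)
    #   same name + same path with -x disable or -x failover

    # Accepted: same controller name, same subsystem, second listener port -> extra path.
    rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
            -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1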
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.251 16:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:02.251 16:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.251 16:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:02.251 16:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.251 16:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:29:02.251 16:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.251 16:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:02.511 00:29:02.511 16:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.511 16:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:02.511 16:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:02.511 16:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.511 16:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:02.511 16:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.511 16:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:02.511 16:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:03.886 0 00:29:03.886 16:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:29:03.886 16:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.886 16:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.886 16:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.886 16:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 747167 00:29:03.886 16:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 747167 ']' 00:29:03.886 16:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 747167 00:29:03.886 16:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:29:03.886 16:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:03.886 16:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 747167 00:29:03.886 16:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:03.886 
16:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:03.886 16:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 747167' 00:29:03.886 killing process with pid 747167 00:29:03.886 16:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 747167 00:29:03.886 16:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 747167 00:29:04.824 16:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:04.824 16:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.824 16:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:04.824 16:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.824 16:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:04.824 16:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.824 16:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:04.824 16:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.825 16:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:29:04.825 16:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:04.825 16:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:29:04.825 16:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:04.825 16:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:29:04.825 16:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:29:04.825 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:04.825 [2024-07-26 16:34:20.717861] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:29:04.825 [2024-07-26 16:34:20.718024] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid747167 ] 00:29:04.825 EAL: No free 2048 kB hugepages reported on node 1 00:29:04.825 [2024-07-26 16:34:20.841635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.825 [2024-07-26 16:34:21.081222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:04.825 [2024-07-26 16:34:22.115753] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name b1401c06-b9d9-4124-87e7-b35a0f64be2e already exists 00:29:04.825 [2024-07-26 16:34:22.115808] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:b1401c06-b9d9-4124-87e7-b35a0f64be2e alias for bdev NVMe1n1 00:29:04.825 [2024-07-26 16:34:22.115842] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:04.825 Running I/O for 1 seconds... 
00:29:04.825 00:29:04.825 Latency(us) 00:29:04.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.825 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:04.825 NVMe0n1 : 1.01 11960.17 46.72 0.00 0.00 10640.71 3301.07 14854.83 00:29:04.825 =================================================================================================================== 00:29:04.825 Total : 11960.17 46.72 0.00 0.00 10640.71 3301.07 14854.83 00:29:04.825 Received shutdown signal, test time was about 1.000000 seconds 00:29:04.825 00:29:04.825 Latency(us) 00:29:04.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.825 =================================================================================================================== 00:29:04.825 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:04.825 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:04.825 16:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:04.825 16:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:29:04.825 16:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:29:04.825 16:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:04.825 16:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:29:04.825 16:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:04.825 16:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:29:04.825 16:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:04.825 16:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:04.825 rmmod nvme_tcp 00:29:04.825 rmmod nvme_fabrics 00:29:04.825 rmmod nvme_keyring 00:29:04.825 16:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:04.825 16:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:29:04.825 16:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:29:04.825 16:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 747014 ']' 00:29:04.825 16:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 747014 00:29:04.825 16:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 747014 ']' 00:29:04.825 16:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 747014 00:29:04.825 16:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:29:04.825 16:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:04.825 16:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 747014 00:29:04.825 16:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:04.825 16:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:04.825 16:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 747014' 00:29:04.825 killing process with pid 747014 00:29:04.825 16:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 747014 00:29:04.825 16:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 747014 00:29:06.205 16:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:06.205 16:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:06.205 16:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:06.205 16:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:06.205 16:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:06.205 16:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:06.205 16:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:06.205 16:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:08.741 16:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:08.741 00:29:08.741 real 0m10.693s 00:29:08.741 user 0m22.009s 00:29:08.741 sys 0m2.555s 00:29:08.741 16:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:08.741 16:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.741 ************************************ 00:29:08.741 END TEST nvmf_multicontroller 00:29:08.741 ************************************ 00:29:08.741 16:34:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:08.741 16:34:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:08.741 16:34:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:08.741 16:34:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.741 ************************************ 00:29:08.741 START TEST nvmf_aer 00:29:08.741 ************************************ 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:08.741 * Looking for test storage... 
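
For reference, the multicontroller sequence traced above reduces to a short RPC conversation with the bdevperf application socket: drop the NVMe0 path to the 4421 listener, attach a second controller (NVMe1) to the same subsystem through that listener, confirm that two controllers are registered, and re-run I/O. The "Bdev name ... already exists" errors in the dumped bdevperf log come from the second controller exposing the same namespace UUID, which this test tolerates. A minimal sketch of the same steps using scripts/rpc.py directly instead of the rpc_cmd shell wrapper (socket path, addresses, flags and NQN are the ones shown in this run):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bdevperf.sock

    # drop the NVMe0 path, then attach a second controller to the same subsystem
    $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe1 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -i 10.0.0.2 -c 60000

    # the test expects exactly two controllers to be reported here
    $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_get_controllers | grep -c NVMe

    # kick off another bdevperf pass over the registered bdevs
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests
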
00:29:08.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:29:08.741 16:34:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:10.648 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:10.648 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:10.648 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.648 16:34:29 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:10.648 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:10.648 16:34:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:10.648 16:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:10.648 16:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:10.648 16:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:10.648 16:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:10.648 16:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:10.648 16:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:10.648 16:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:10.648 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:29:10.648 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:29:10.648 00:29:10.648 --- 10.0.0.2 ping statistics --- 00:29:10.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.648 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:29:10.648 16:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:10.648 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:10.648 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:29:10.648 00:29:10.648 --- 10.0.0.1 ping statistics --- 00:29:10.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.648 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:29:10.648 16:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:10.648 16:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:29:10.648 16:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:10.648 16:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:10.648 16:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:10.649 16:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:10.649 16:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:10.649 16:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:10.649 16:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:10.649 16:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:10.649 16:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:10.649 16:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:10.649 16:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:10.649 16:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=749642 00:29:10.649 16:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:10.649 16:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 749642 00:29:10.649 16:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 749642 ']' 00:29:10.649 16:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:10.649 16:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:10.649 16:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:10.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:10.649 16:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:10.649 16:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:10.649 [2024-07-26 16:34:30.242221] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
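
The nvmf_tcp_init sequence above wires the two ice ports into a point-to-point test network: the target port (cvl_0_0) is moved into a private network namespace and given 10.0.0.2, the initiator port (cvl_0_1) stays in the root namespace with 10.0.0.1, port 4420 is opened in the firewall, and both directions are ping-tested. Collected into one place, the same commands the trace runs (minus the wrapper functions):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

The target application is then launched inside that namespace, which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD and the nvmfpid line shows "ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF".
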
00:29:10.649 [2024-07-26 16:34:30.242385] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:10.649 EAL: No free 2048 kB hugepages reported on node 1 00:29:10.649 [2024-07-26 16:34:30.397123] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:10.908 [2024-07-26 16:34:30.661379] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:10.908 [2024-07-26 16:34:30.661459] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:10.908 [2024-07-26 16:34:30.661487] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:10.908 [2024-07-26 16:34:30.661509] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:10.908 [2024-07-26 16:34:30.661532] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:10.908 [2024-07-26 16:34:30.661729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:10.908 [2024-07-26 16:34:30.661810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:10.908 [2024-07-26 16:34:30.661893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.908 [2024-07-26 16:34:30.661903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:11.474 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:11.474 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:29:11.474 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:11.474 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:11.474 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:11.474 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:11.474 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:11.474 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.474 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:11.474 [2024-07-26 16:34:31.223856] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:11.474 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.474 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:11.474 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.474 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:11.732 Malloc0 00:29:11.732 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.732 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:11.732 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.732 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:11.732 16:34:31 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.732 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:11.732 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.732 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:11.732 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.732 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:11.732 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.732 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:11.732 [2024-07-26 16:34:31.327460] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:11.732 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.732 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:11.732 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.732 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:11.732 [ 00:29:11.732 { 00:29:11.732 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:11.732 "subtype": "Discovery", 00:29:11.732 "listen_addresses": [], 00:29:11.732 "allow_any_host": true, 00:29:11.732 "hosts": [] 00:29:11.732 }, 00:29:11.732 { 00:29:11.732 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:11.732 "subtype": "NVMe", 00:29:11.732 "listen_addresses": [ 00:29:11.732 { 00:29:11.732 "trtype": "TCP", 00:29:11.732 "adrfam": "IPv4", 00:29:11.732 "traddr": "10.0.0.2", 00:29:11.732 "trsvcid": "4420" 00:29:11.732 } 00:29:11.732 ], 00:29:11.732 "allow_any_host": true, 00:29:11.732 "hosts": [], 00:29:11.732 "serial_number": "SPDK00000000000001", 00:29:11.732 "model_number": "SPDK bdev Controller", 00:29:11.732 "max_namespaces": 2, 00:29:11.732 "min_cntlid": 1, 00:29:11.732 "max_cntlid": 65519, 00:29:11.732 "namespaces": [ 00:29:11.732 { 00:29:11.732 "nsid": 1, 00:29:11.732 "bdev_name": "Malloc0", 00:29:11.732 "name": "Malloc0", 00:29:11.732 "nguid": "211906693B694F1B80A1BF1B90E822E2", 00:29:11.732 "uuid": "21190669-3b69-4f1b-80a1-bf1b90e822e2" 00:29:11.732 } 00:29:11.732 ] 00:29:11.732 } 00:29:11.732 ] 00:29:11.732 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.732 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:11.732 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:11.732 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=749807 00:29:11.732 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:11.732 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:11.732 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:29:11.732 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:11.732 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:29:11.732 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:29:11.732 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:11.732 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:11.732 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:29:11.732 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:29:11.732 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:11.732 EAL: No free 2048 kB hugepages reported on node 1 00:29:11.989 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:11.989 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:29:11.989 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:29:11.989 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:11.989 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:11.989 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 3 -lt 200 ']' 00:29:11.989 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=4 00:29:11.989 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:12.247 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:12.247 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:12.247 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:29:12.247 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:12.247 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.247 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:12.247 Malloc1 00:29:12.247 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.247 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:12.247 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.247 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:12.247 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.247 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:12.247 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.247 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:12.247 [ 00:29:12.247 { 00:29:12.247 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:12.247 "subtype": "Discovery", 00:29:12.247 "listen_addresses": [], 00:29:12.247 "allow_any_host": true, 00:29:12.247 "hosts": [] 00:29:12.247 }, 00:29:12.247 { 00:29:12.247 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:12.247 "subtype": "NVMe", 00:29:12.247 "listen_addresses": [ 00:29:12.247 { 00:29:12.247 "trtype": "TCP", 00:29:12.247 "adrfam": "IPv4", 00:29:12.247 "traddr": "10.0.0.2", 00:29:12.247 "trsvcid": "4420" 00:29:12.247 } 00:29:12.247 ], 00:29:12.247 "allow_any_host": true, 00:29:12.247 "hosts": [], 00:29:12.247 "serial_number": "SPDK00000000000001", 00:29:12.247 "model_number": "SPDK bdev Controller", 00:29:12.247 "max_namespaces": 2, 00:29:12.247 "min_cntlid": 1, 00:29:12.247 "max_cntlid": 65519, 00:29:12.247 "namespaces": [ 00:29:12.247 { 00:29:12.247 "nsid": 1, 00:29:12.247 "bdev_name": "Malloc0", 00:29:12.247 "name": "Malloc0", 00:29:12.247 "nguid": "211906693B694F1B80A1BF1B90E822E2", 00:29:12.247 "uuid": "21190669-3b69-4f1b-80a1-bf1b90e822e2" 00:29:12.247 }, 00:29:12.247 { 00:29:12.247 "nsid": 2, 00:29:12.247 "bdev_name": "Malloc1", 00:29:12.247 "name": "Malloc1", 00:29:12.247 "nguid": "0CA9A8A2B8404B41874CDC402207CCBE", 00:29:12.247 "uuid": "0ca9a8a2-b840-4b41-874c-dc402207ccbe" 00:29:12.247 } 00:29:12.247 ] 00:29:12.247 } 00:29:12.247 ] 00:29:12.247 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.247 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 749807 00:29:12.247 Asynchronous Event Request test 00:29:12.247 Attaching to 10.0.0.2 00:29:12.247 Attached to 10.0.0.2 00:29:12.247 Registering asynchronous event callbacks... 00:29:12.247 Starting namespace attribute notice tests for all controllers... 00:29:12.247 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:12.247 aer_cb - Changed Namespace 00:29:12.247 Cleaning up... 
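
The AER test itself is a short RPC conversation with the target started above: create the TCP transport, export a malloc bdev as namespace 1 of cnode1 (capped at two namespaces), add the 4420 listener, point the aer example host at it, and then hot-add a second namespace so the target emits a namespace-attribute-changed AEN, which is what the "aer_cb - Changed Namespace" line confirms. A condensed sketch of the same steps with scripts/rpc.py (the trace uses the rpc_cmd wrapper and synchronizes on the touch file via waitforfile before the hot-add; paths, options and NQN match this run):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    NQN=nqn.2016-06.io.spdk:cnode1

    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
    $SPDK/scripts/rpc.py nvmf_create_subsystem $NQN -a -s SPDK00000000000001 -m 2
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns $NQN Malloc0
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420

    # host side: the aer example registers its AER callback, then touches the file
    # (-n 2 and the touch file path are the options used in this run)
    $SPDK/test/nvme/aer/aer \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &

    # once the touch file exists, hot-add a second namespace to trigger the AEN
    $SPDK/scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns $NQN Malloc1 -n 2
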
00:29:12.247 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:12.247 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.247 16:34:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:12.507 16:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.507 16:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:12.507 16:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.507 16:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:12.768 16:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.768 16:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:12.768 16:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.768 16:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:12.768 16:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.768 16:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:12.768 16:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:12.768 16:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:12.768 16:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:29:12.768 16:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:12.768 16:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:29:12.768 16:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:12.768 16:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:12.768 rmmod nvme_tcp 00:29:12.768 rmmod nvme_fabrics 00:29:12.768 rmmod nvme_keyring 00:29:12.768 16:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:12.768 16:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:29:12.768 16:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:29:12.768 16:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 749642 ']' 00:29:12.768 16:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 749642 00:29:12.768 16:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 749642 ']' 00:29:12.768 16:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 749642 00:29:12.768 16:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:29:12.768 16:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:12.768 16:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 749642 00:29:12.768 16:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:12.768 16:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:12.768 16:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 749642' 00:29:12.768 killing process with pid 749642 00:29:12.768 16:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 
749642 00:29:12.768 16:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 749642 00:29:14.185 16:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:14.185 16:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:14.185 16:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:14.185 16:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:14.185 16:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:14.185 16:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.185 16:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:14.185 16:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.095 16:34:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:16.095 00:29:16.095 real 0m7.667s 00:29:16.095 user 0m11.208s 00:29:16.095 sys 0m2.199s 00:29:16.095 16:34:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:16.095 16:34:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:16.095 ************************************ 00:29:16.095 END TEST nvmf_aer 00:29:16.095 ************************************ 00:29:16.095 16:34:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:16.095 16:34:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:16.095 16:34:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:16.095 16:34:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.095 ************************************ 00:29:16.095 START TEST nvmf_async_init 00:29:16.095 ************************************ 00:29:16.095 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:16.095 * Looking for test storage... 
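
Each of these per-host suites is launched through the same run_test helper, which is where the asterisk banners and the real/user/sys summaries bracketing every test come from. Roughly, as a sketch inferred from the banners and timing visible in this log (not a quote of the actual helper in autotest_common.sh):

    run_test() {
        # e.g. run_test nvmf_async_init $SPDK/test/nvmf/host/async_init.sh --transport=tcp
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                     # produces the real/user/sys lines above
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }
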
00:29:16.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:16.095 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:16.095 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:16.095 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:16.095 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:16.095 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:16.095 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:16.095 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:16.095 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:16.095 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:16.095 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:16.095 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:16.095 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:16.095 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:16.095 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:16.095 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:16.095 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:16.095 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:16.095 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:16.095 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:16.095 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:16.095 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:16.095 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:16.095 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.096 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.096 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.096 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:16.096 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.096 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:29:16.096 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:16.096 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:16.096 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:16.096 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:16.096 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:16.096 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:16.096 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:16.096 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:16.096 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:16.096 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:16.096 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:16.096 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:16.096 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:16.096 16:34:35 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:16.096 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=0bdcae49c19d4d4aabdb7dca0f59cb55 00:29:16.096 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:16.096 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:16.096 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:16.096 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:16.096 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:16.096 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:16.096 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:16.096 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:16.096 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.096 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:16.096 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:16.096 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:29:16.096 16:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:18.000 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:18.000 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
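
Before async_init talks to a target, the trace above only sets up its fixtures: common.sh generates a host NQN with nvme-cli and keeps the matching host ID, and async_init.sh records the null bdev geometry it will create later (size 1024, block size 512, named null0/nvme0) plus a namespace GUID with the dashes stripped. The equivalent shell, with the host-ID derivation written as one plausible expansion rather than a quote of common.sh:

    # host identity passed to later 'nvme connect' style commands (values match this run)
    NVME_HOSTNQN=$(nvme gen-hostnqn)          # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}       # keep just the uuid portion (assumed derivation)
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

    # async_init fixtures
    null_bdev_size=1024
    null_block_size=512
    null_bdev=null0
    nvme_bdev=nvme0
    nguid=$(uuidgen | tr -d -)                # e.g. 0bdcae49c19d4d4aabdb7dca0f59cb55
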
00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:18.000 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:18.000 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:18.001 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- 
# NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:18.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:18.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:29:18.001 00:29:18.001 --- 10.0.0.2 ping statistics --- 00:29:18.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.001 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:18.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:18.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:29:18.001 00:29:18.001 --- 10.0.0.1 ping statistics --- 00:29:18.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.001 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=751991 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 751991 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 751991 ']' 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:18.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:18.001 16:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:18.259 [2024-07-26 16:34:37.802725] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
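The nvmf_tcp_init trace above turns the two detected E810 ports (cvl_0_0 and cvl_0_1) into a self-contained TCP test topology: the target-side port is moved into the cvl_0_0_ns_spdk network namespace as 10.0.0.2, the initiator-side port stays in the default namespace as 10.0.0.1, port 4420 is opened in iptables, and both directions are verified with a single ping before the target starts. Condensed into a standalone sketch (assuming the same interface names and addressing as this run), the setup is:

    ip netns add cvl_0_0_ns_spdk                      # namespace that will hold the target-side port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP on the default port
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator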
00:29:18.259 [2024-07-26 16:34:37.802869] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:18.259 EAL: No free 2048 kB hugepages reported on node 1 00:29:18.259 [2024-07-26 16:34:37.935396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:18.519 [2024-07-26 16:34:38.186246] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:18.519 [2024-07-26 16:34:38.186322] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:18.519 [2024-07-26 16:34:38.186359] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:18.519 [2024-07-26 16:34:38.186384] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:18.519 [2024-07-26 16:34:38.186406] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:18.519 [2024-07-26 16:34:38.186460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:19.088 16:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:19.088 16:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:29:19.088 16:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:19.088 16:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:19.088 16:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:19.088 16:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:19.088 16:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:19.088 16:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.088 16:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:19.088 [2024-07-26 16:34:38.730561] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:19.088 16:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.088 16:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:19.088 16:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.088 16:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:19.088 null0 00:29:19.088 16:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.088 16:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:19.088 16:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.088 16:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:19.088 16:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.088 16:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:19.088 16:34:38 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.088 16:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:19.088 16:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.088 16:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 0bdcae49c19d4d4aabdb7dca0f59cb55 00:29:19.088 16:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.088 16:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:19.088 16:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.088 16:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:19.088 16:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.088 16:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:19.088 [2024-07-26 16:34:38.770870] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:19.088 16:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.088 16:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:19.088 16:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.088 16:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:19.348 nvme0n1 00:29:19.348 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.348 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:19.348 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.348 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:19.348 [ 00:29:19.348 { 00:29:19.348 "name": "nvme0n1", 00:29:19.348 "aliases": [ 00:29:19.348 "0bdcae49-c19d-4d4a-abdb-7dca0f59cb55" 00:29:19.348 ], 00:29:19.348 "product_name": "NVMe disk", 00:29:19.348 "block_size": 512, 00:29:19.348 "num_blocks": 2097152, 00:29:19.348 "uuid": "0bdcae49-c19d-4d4a-abdb-7dca0f59cb55", 00:29:19.348 "assigned_rate_limits": { 00:29:19.348 "rw_ios_per_sec": 0, 00:29:19.348 "rw_mbytes_per_sec": 0, 00:29:19.348 "r_mbytes_per_sec": 0, 00:29:19.348 "w_mbytes_per_sec": 0 00:29:19.348 }, 00:29:19.348 "claimed": false, 00:29:19.348 "zoned": false, 00:29:19.348 "supported_io_types": { 00:29:19.348 "read": true, 00:29:19.348 "write": true, 00:29:19.348 "unmap": false, 00:29:19.348 "flush": true, 00:29:19.348 "reset": true, 00:29:19.348 "nvme_admin": true, 00:29:19.348 "nvme_io": true, 00:29:19.348 "nvme_io_md": false, 00:29:19.348 "write_zeroes": true, 00:29:19.348 "zcopy": false, 00:29:19.348 "get_zone_info": false, 00:29:19.348 "zone_management": false, 00:29:19.348 "zone_append": false, 00:29:19.348 "compare": true, 00:29:19.348 "compare_and_write": true, 00:29:19.348 "abort": true, 00:29:19.348 "seek_hole": false, 00:29:19.348 "seek_data": false, 00:29:19.348 "copy": true, 00:29:19.348 "nvme_iov_md": 
false 00:29:19.348 }, 00:29:19.348 "memory_domains": [ 00:29:19.348 { 00:29:19.348 "dma_device_id": "system", 00:29:19.348 "dma_device_type": 1 00:29:19.348 } 00:29:19.348 ], 00:29:19.348 "driver_specific": { 00:29:19.348 "nvme": [ 00:29:19.348 { 00:29:19.348 "trid": { 00:29:19.348 "trtype": "TCP", 00:29:19.348 "adrfam": "IPv4", 00:29:19.348 "traddr": "10.0.0.2", 00:29:19.348 "trsvcid": "4420", 00:29:19.348 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:19.348 }, 00:29:19.348 "ctrlr_data": { 00:29:19.348 "cntlid": 1, 00:29:19.348 "vendor_id": "0x8086", 00:29:19.348 "model_number": "SPDK bdev Controller", 00:29:19.348 "serial_number": "00000000000000000000", 00:29:19.348 "firmware_revision": "24.09", 00:29:19.348 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:19.348 "oacs": { 00:29:19.348 "security": 0, 00:29:19.348 "format": 0, 00:29:19.348 "firmware": 0, 00:29:19.348 "ns_manage": 0 00:29:19.348 }, 00:29:19.348 "multi_ctrlr": true, 00:29:19.348 "ana_reporting": false 00:29:19.348 }, 00:29:19.348 "vs": { 00:29:19.348 "nvme_version": "1.3" 00:29:19.348 }, 00:29:19.348 "ns_data": { 00:29:19.348 "id": 1, 00:29:19.348 "can_share": true 00:29:19.348 } 00:29:19.348 } 00:29:19.348 ], 00:29:19.348 "mp_policy": "active_passive" 00:29:19.348 } 00:29:19.348 } 00:29:19.348 ] 00:29:19.348 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.348 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:19.348 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.348 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:19.348 [2024-07-26 16:34:39.027523] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:19.348 [2024-07-26 16:34:39.027659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:29:19.607 [2024-07-26 16:34:39.160301] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
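With the target up, host/async_init.sh drives everything through SPDK JSON-RPC (rpc_cmd is the suite's shorthand for issuing those calls). Expressed as direct scripts/rpc.py invocations against the target's default /var/tmp/spdk.sock socket, the create/attach/reset sequence traced above is roughly:

    # target side: TCP transport, a 1024 MiB null bdev with 512 B blocks, and a subsystem
    # whose namespace carries the NGUID generated at the top of the test
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py bdev_null_create null0 1024 512
    ./scripts/rpc.py bdev_wait_for_examine
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 0bdcae49c19d4d4aabdb7dca0f59cb55
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # host side: attach through bdev_nvme, confirm the bdev reports the same GUID, then reset
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    ./scripts/rpc.py bdev_get_bdevs -b nvme0n1
    ./scripts/rpc.py bdev_nvme_reset_controller nvme0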
00:29:19.607 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.607 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:19.607 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.607 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:19.607 [ 00:29:19.607 { 00:29:19.607 "name": "nvme0n1", 00:29:19.607 "aliases": [ 00:29:19.607 "0bdcae49-c19d-4d4a-abdb-7dca0f59cb55" 00:29:19.607 ], 00:29:19.607 "product_name": "NVMe disk", 00:29:19.607 "block_size": 512, 00:29:19.607 "num_blocks": 2097152, 00:29:19.607 "uuid": "0bdcae49-c19d-4d4a-abdb-7dca0f59cb55", 00:29:19.607 "assigned_rate_limits": { 00:29:19.607 "rw_ios_per_sec": 0, 00:29:19.607 "rw_mbytes_per_sec": 0, 00:29:19.607 "r_mbytes_per_sec": 0, 00:29:19.607 "w_mbytes_per_sec": 0 00:29:19.607 }, 00:29:19.607 "claimed": false, 00:29:19.607 "zoned": false, 00:29:19.607 "supported_io_types": { 00:29:19.607 "read": true, 00:29:19.607 "write": true, 00:29:19.607 "unmap": false, 00:29:19.607 "flush": true, 00:29:19.607 "reset": true, 00:29:19.607 "nvme_admin": true, 00:29:19.607 "nvme_io": true, 00:29:19.607 "nvme_io_md": false, 00:29:19.607 "write_zeroes": true, 00:29:19.607 "zcopy": false, 00:29:19.607 "get_zone_info": false, 00:29:19.607 "zone_management": false, 00:29:19.607 "zone_append": false, 00:29:19.607 "compare": true, 00:29:19.607 "compare_and_write": true, 00:29:19.607 "abort": true, 00:29:19.607 "seek_hole": false, 00:29:19.607 "seek_data": false, 00:29:19.607 "copy": true, 00:29:19.607 "nvme_iov_md": false 00:29:19.607 }, 00:29:19.607 "memory_domains": [ 00:29:19.607 { 00:29:19.607 "dma_device_id": "system", 00:29:19.607 "dma_device_type": 1 00:29:19.607 } 00:29:19.607 ], 00:29:19.607 "driver_specific": { 00:29:19.607 "nvme": [ 00:29:19.607 { 00:29:19.607 "trid": { 00:29:19.607 "trtype": "TCP", 00:29:19.607 "adrfam": "IPv4", 00:29:19.607 "traddr": "10.0.0.2", 00:29:19.607 "trsvcid": "4420", 00:29:19.607 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:19.607 }, 00:29:19.607 "ctrlr_data": { 00:29:19.607 "cntlid": 2, 00:29:19.607 "vendor_id": "0x8086", 00:29:19.607 "model_number": "SPDK bdev Controller", 00:29:19.607 "serial_number": "00000000000000000000", 00:29:19.607 "firmware_revision": "24.09", 00:29:19.607 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:19.607 "oacs": { 00:29:19.607 "security": 0, 00:29:19.607 "format": 0, 00:29:19.607 "firmware": 0, 00:29:19.607 "ns_manage": 0 00:29:19.607 }, 00:29:19.607 "multi_ctrlr": true, 00:29:19.607 "ana_reporting": false 00:29:19.607 }, 00:29:19.607 "vs": { 00:29:19.607 "nvme_version": "1.3" 00:29:19.607 }, 00:29:19.607 "ns_data": { 00:29:19.607 "id": 1, 00:29:19.607 "can_share": true 00:29:19.607 } 00:29:19.607 } 00:29:19.607 ], 00:29:19.607 "mp_policy": "active_passive" 00:29:19.607 } 00:29:19.607 } 00:29:19.607 ] 00:29:19.607 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.607 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.607 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.607 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:19.607 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.607 16:34:39 
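The second bdev_get_bdevs dump above shows the reset completed transparently: the bdev keeps the same name, size and uuid, but ctrlr_data.cntlid has moved from 1 to 2 because the host re-established a new controller on reconnect. If jq is available (an assumption, it is not part of the test itself), the field the test cares about can be pulled straight out of the dump:

    ./scripts/rpc.py bdev_get_bdevs -b nvme0n1 \
        | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'   # prints 1 before the reset, 2 after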
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:19.607 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.3Zy54jaExD 00:29:19.608 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:19.608 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.3Zy54jaExD 00:29:19.608 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:19.608 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.608 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:19.608 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.608 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:19.608 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.608 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:19.608 [2024-07-26 16:34:39.216242] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:19.608 [2024-07-26 16:34:39.216442] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:19.608 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.608 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3Zy54jaExD 00:29:19.608 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.608 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:19.608 [2024-07-26 16:34:39.224229] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:19.608 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.608 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3Zy54jaExD 00:29:19.608 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.608 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:19.608 [2024-07-26 16:34:39.232250] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:19.608 [2024-07-26 16:34:39.232379] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:29:19.608 nvme0n1 00:29:19.608 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.608 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:19.608 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 
00:29:19.608 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:19.608 [ 00:29:19.608 { 00:29:19.608 "name": "nvme0n1", 00:29:19.608 "aliases": [ 00:29:19.608 "0bdcae49-c19d-4d4a-abdb-7dca0f59cb55" 00:29:19.608 ], 00:29:19.608 "product_name": "NVMe disk", 00:29:19.608 "block_size": 512, 00:29:19.608 "num_blocks": 2097152, 00:29:19.608 "uuid": "0bdcae49-c19d-4d4a-abdb-7dca0f59cb55", 00:29:19.608 "assigned_rate_limits": { 00:29:19.608 "rw_ios_per_sec": 0, 00:29:19.608 "rw_mbytes_per_sec": 0, 00:29:19.608 "r_mbytes_per_sec": 0, 00:29:19.608 "w_mbytes_per_sec": 0 00:29:19.608 }, 00:29:19.608 "claimed": false, 00:29:19.608 "zoned": false, 00:29:19.608 "supported_io_types": { 00:29:19.608 "read": true, 00:29:19.608 "write": true, 00:29:19.608 "unmap": false, 00:29:19.608 "flush": true, 00:29:19.608 "reset": true, 00:29:19.608 "nvme_admin": true, 00:29:19.608 "nvme_io": true, 00:29:19.608 "nvme_io_md": false, 00:29:19.608 "write_zeroes": true, 00:29:19.608 "zcopy": false, 00:29:19.608 "get_zone_info": false, 00:29:19.608 "zone_management": false, 00:29:19.608 "zone_append": false, 00:29:19.608 "compare": true, 00:29:19.608 "compare_and_write": true, 00:29:19.608 "abort": true, 00:29:19.608 "seek_hole": false, 00:29:19.608 "seek_data": false, 00:29:19.608 "copy": true, 00:29:19.608 "nvme_iov_md": false 00:29:19.608 }, 00:29:19.608 "memory_domains": [ 00:29:19.608 { 00:29:19.608 "dma_device_id": "system", 00:29:19.608 "dma_device_type": 1 00:29:19.608 } 00:29:19.608 ], 00:29:19.608 "driver_specific": { 00:29:19.608 "nvme": [ 00:29:19.608 { 00:29:19.608 "trid": { 00:29:19.608 "trtype": "TCP", 00:29:19.608 "adrfam": "IPv4", 00:29:19.608 "traddr": "10.0.0.2", 00:29:19.608 "trsvcid": "4421", 00:29:19.608 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:19.608 }, 00:29:19.608 "ctrlr_data": { 00:29:19.608 "cntlid": 3, 00:29:19.608 "vendor_id": "0x8086", 00:29:19.608 "model_number": "SPDK bdev Controller", 00:29:19.608 "serial_number": "00000000000000000000", 00:29:19.608 "firmware_revision": "24.09", 00:29:19.608 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:19.608 "oacs": { 00:29:19.608 "security": 0, 00:29:19.608 "format": 0, 00:29:19.608 "firmware": 0, 00:29:19.608 "ns_manage": 0 00:29:19.608 }, 00:29:19.608 "multi_ctrlr": true, 00:29:19.608 "ana_reporting": false 00:29:19.608 }, 00:29:19.608 "vs": { 00:29:19.608 "nvme_version": "1.3" 00:29:19.608 }, 00:29:19.608 "ns_data": { 00:29:19.608 "id": 1, 00:29:19.608 "can_share": true 00:29:19.608 } 00:29:19.608 } 00:29:19.608 ], 00:29:19.608 "mp_policy": "active_passive" 00:29:19.608 } 00:29:19.608 } 00:29:19.608 ] 00:29:19.608 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.608 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.608 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.608 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:19.608 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.608 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.3Zy54jaExD 00:29:19.608 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:29:19.608 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:29:19.608 16:34:39 
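The remainder of the test repeats the attach over a TLS-protected listener, which the target still flags as experimental and whose PSK-path option is marked deprecated for v24.09. The pattern traced above: restrict the subsystem to named hosts, add a second listener on port 4421 with --secure-channel, register the host NQN with a PSK file, and pass the same file to the host-side attach. As a sketch, with the interchange key written to a private temp file (the redirection itself is not visible in the xtrace output, only the chmod on the same path):

    key_path=$(mktemp)
    echo -n "NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" > "$key_path"
    chmod 0600 "$key_path"
    ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
    rm -f "$key_path"    # the key file is removed once the secure attach has been verified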
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:19.608 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:29:19.608 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:19.608 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:29:19.608 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:19.608 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:19.608 rmmod nvme_tcp 00:29:19.608 rmmod nvme_fabrics 00:29:19.866 rmmod nvme_keyring 00:29:19.866 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:19.866 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:29:19.866 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:29:19.866 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 751991 ']' 00:29:19.866 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 751991 00:29:19.866 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 751991 ']' 00:29:19.866 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 751991 00:29:19.866 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:29:19.866 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:19.866 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 751991 00:29:19.866 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:19.866 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:19.866 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 751991' 00:29:19.866 killing process with pid 751991 00:29:19.866 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 751991 00:29:19.866 [2024-07-26 16:34:39.422922] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:29:19.866 [2024-07-26 16:34:39.422971] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:19.866 16:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 751991 00:29:21.247 16:34:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:21.247 16:34:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:21.247 16:34:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:21.247 16:34:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:21.247 16:34:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:21.247 16:34:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.247 16:34:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:21.247 16:34:40 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.149 16:34:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:23.149 00:29:23.149 real 0m6.946s 00:29:23.149 user 0m3.703s 00:29:23.149 sys 0m1.850s 00:29:23.149 16:34:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:23.149 16:34:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:23.149 ************************************ 00:29:23.149 END TEST nvmf_async_init 00:29:23.150 ************************************ 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.150 ************************************ 00:29:23.150 START TEST dma 00:29:23.150 ************************************ 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:23.150 * Looking for test storage... 00:29:23.150 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:23.150 
16:34:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:23.150 16:34:42 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:29:23.150 00:29:23.150 real 0m0.069s 00:29:23.150 user 0m0.027s 00:29:23.150 sys 0m0.048s 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:23.150 ************************************ 00:29:23.150 END TEST dma 00:29:23.150 ************************************ 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.150 ************************************ 00:29:23.150 START TEST nvmf_identify 00:29:23.150 ************************************ 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:23.150 * Looking for test storage... 00:29:23.150 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.150 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:29:23.151 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.151 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:29:23.151 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:23.151 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:23.151 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:23.151 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:23.151 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:23.151 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:23.151 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:23.151 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:23.151 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:23.151 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:23.151 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:23.151 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:23.151 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:23.151 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:23.151 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:23.151 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:23.151 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.151 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:23.151 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.151 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:23.151 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:23.151 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:29:23.151 16:34:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:25.685 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:25.685 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:29:25.685 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:25.685 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:25.685 16:34:44 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:25.686 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:25.686 16:34:44 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:25.686 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:25.686 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:25.686 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:25.686 16:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:25.686 16:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:25.686 16:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:25.686 16:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:25.686 16:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:25.686 16:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:25.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:25.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:29:25.686 00:29:25.686 --- 10.0.0.2 ping statistics --- 00:29:25.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.686 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:29:25.686 16:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:25.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:25.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:29:25.686 00:29:25.686 --- 10.0.0.1 ping statistics --- 00:29:25.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.686 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:29:25.686 16:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:25.686 16:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:29:25.686 16:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:25.686 16:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:25.686 16:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:25.686 16:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:25.686 16:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:25.686 16:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:25.686 16:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:25.686 16:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:25.686 16:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:25.686 16:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:25.686 16:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=754250 00:29:25.686 16:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:25.687 16:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:25.687 16:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 754250 00:29:25.687 16:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 754250 ']' 00:29:25.687 16:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:25.687 16:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:25.687 16:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:25.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:25.687 16:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:25.687 16:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:25.687 [2024-07-26 16:34:45.199854] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
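The nvmf_tcp_init steps traced above build the loopback topology this test runs on: port cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2/24 (target side), cvl_0_1 stays in the root namespace as 10.0.0.1/24 (initiator side), an iptables rule opens TCP port 4420, and both directions are ping-checked. A condensed sketch of the equivalent manual setup, assuming the same interface names, namespace name and addresses as this run, would be:

# Loopback topology set up by nvmf_tcp_init (sketch; names and addresses copied from the trace above)
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic from the initiator port
ping -c 1 10.0.0.2                                                  # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator reachability

With the namespace in place, nvmf_tgt is launched under ip netns exec cvl_0_0_ns_spdk (as shown just above), so the target listens on 10.0.0.2 while the initiator tools run from the root namespace.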
00:29:25.687 [2024-07-26 16:34:45.200019] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:25.687 EAL: No free 2048 kB hugepages reported on node 1 00:29:25.687 [2024-07-26 16:34:45.355250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:25.946 [2024-07-26 16:34:45.617004] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:25.946 [2024-07-26 16:34:45.617099] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:25.946 [2024-07-26 16:34:45.617126] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:25.946 [2024-07-26 16:34:45.617145] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:25.946 [2024-07-26 16:34:45.617164] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:25.946 [2024-07-26 16:34:45.617283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.946 [2024-07-26 16:34:45.617333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:25.946 [2024-07-26 16:34:45.617380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:25.946 [2024-07-26 16:34:45.617391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:26.512 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:26.512 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:29:26.512 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:26.512 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.512 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:26.512 [2024-07-26 16:34:46.142965] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:26.512 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.512 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:26.512 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:26.512 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:26.513 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:26.513 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.513 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:26.513 Malloc0 00:29:26.513 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.513 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:26.513 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.513 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:26.513 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:29:26.513 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:26.513 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.513 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:26.513 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.513 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:26.513 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.513 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:26.513 [2024-07-26 16:34:46.266219] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:26.513 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.513 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:26.513 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.513 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:26.772 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.772 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:26.772 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.773 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:26.773 [ 00:29:26.773 { 00:29:26.773 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:26.773 "subtype": "Discovery", 00:29:26.773 "listen_addresses": [ 00:29:26.773 { 00:29:26.773 "trtype": "TCP", 00:29:26.773 "adrfam": "IPv4", 00:29:26.773 "traddr": "10.0.0.2", 00:29:26.773 "trsvcid": "4420" 00:29:26.773 } 00:29:26.773 ], 00:29:26.773 "allow_any_host": true, 00:29:26.773 "hosts": [] 00:29:26.773 }, 00:29:26.773 { 00:29:26.773 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:26.773 "subtype": "NVMe", 00:29:26.773 "listen_addresses": [ 00:29:26.773 { 00:29:26.773 "trtype": "TCP", 00:29:26.773 "adrfam": "IPv4", 00:29:26.773 "traddr": "10.0.0.2", 00:29:26.773 "trsvcid": "4420" 00:29:26.773 } 00:29:26.773 ], 00:29:26.773 "allow_any_host": true, 00:29:26.773 "hosts": [], 00:29:26.773 "serial_number": "SPDK00000000000001", 00:29:26.773 "model_number": "SPDK bdev Controller", 00:29:26.773 "max_namespaces": 32, 00:29:26.773 "min_cntlid": 1, 00:29:26.773 "max_cntlid": 65519, 00:29:26.773 "namespaces": [ 00:29:26.773 { 00:29:26.773 "nsid": 1, 00:29:26.773 "bdev_name": "Malloc0", 00:29:26.773 "name": "Malloc0", 00:29:26.773 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:26.773 "eui64": "ABCDEF0123456789", 00:29:26.773 "uuid": "24b3adab-b617-49fb-aa3a-e4b433462fe8" 00:29:26.773 } 00:29:26.773 ] 00:29:26.773 } 00:29:26.773 ] 00:29:26.773 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.773 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' 
trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:26.773 [2024-07-26 16:34:46.326931] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:29:26.773 [2024-07-26 16:34:46.327026] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid754406 ] 00:29:26.773 EAL: No free 2048 kB hugepages reported on node 1 00:29:26.773 [2024-07-26 16:34:46.383451] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:29:26.773 [2024-07-26 16:34:46.383581] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:26.773 [2024-07-26 16:34:46.383602] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:26.773 [2024-07-26 16:34:46.383628] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:26.773 [2024-07-26 16:34:46.383652] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:26.773 [2024-07-26 16:34:46.387148] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:29:26.773 [2024-07-26 16:34:46.387242] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000015700 0 00:29:26.773 [2024-07-26 16:34:46.394080] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:26.773 [2024-07-26 16:34:46.394113] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:26.773 [2024-07-26 16:34:46.394129] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:26.773 [2024-07-26 16:34:46.394140] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:26.773 [2024-07-26 16:34:46.394218] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:26.773 [2024-07-26 16:34:46.394242] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:26.773 [2024-07-26 16:34:46.394262] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:26.773 [2024-07-26 16:34:46.394296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:26.773 [2024-07-26 16:34:46.394366] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:26.773 [2024-07-26 16:34:46.401101] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:26.773 [2024-07-26 16:34:46.401132] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:26.773 [2024-07-26 16:34:46.401145] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:26.773 [2024-07-26 16:34:46.401160] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:26.773 [2024-07-26 16:34:46.401198] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:26.773 [2024-07-26 16:34:46.401225] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:29:26.773 [2024-07-26 16:34:46.401246] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:29:26.773 [2024-07-26 16:34:46.401282] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:26.773 [2024-07-26 16:34:46.401302] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:26.773 [2024-07-26 16:34:46.401315] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:26.773 [2024-07-26 16:34:46.401336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.773 [2024-07-26 16:34:46.401396] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:26.773 [2024-07-26 16:34:46.401638] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:26.773 [2024-07-26 16:34:46.401662] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:26.773 [2024-07-26 16:34:46.401675] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:26.773 [2024-07-26 16:34:46.401689] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:26.773 [2024-07-26 16:34:46.401711] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:29:26.773 [2024-07-26 16:34:46.401755] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:29:26.773 [2024-07-26 16:34:46.401778] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:26.773 [2024-07-26 16:34:46.401792] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:26.773 [2024-07-26 16:34:46.401805] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:26.773 [2024-07-26 16:34:46.401829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.773 [2024-07-26 16:34:46.401870] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:26.773 [2024-07-26 16:34:46.402130] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:26.773 [2024-07-26 16:34:46.402153] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:26.773 [2024-07-26 16:34:46.402165] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:26.773 [2024-07-26 16:34:46.402192] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:26.773 [2024-07-26 16:34:46.402208] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:29:26.773 [2024-07-26 16:34:46.402234] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:29:26.773 [2024-07-26 16:34:46.402256] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:26.773 [2024-07-26 16:34:46.402270] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:26.773 [2024-07-26 16:34:46.402289] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:26.773 [2024-07-26 16:34:46.402309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 
cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.773 [2024-07-26 16:34:46.402374] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:26.773 [2024-07-26 16:34:46.402573] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:26.773 [2024-07-26 16:34:46.402599] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:26.773 [2024-07-26 16:34:46.402613] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:26.773 [2024-07-26 16:34:46.402625] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:26.773 [2024-07-26 16:34:46.402640] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:26.773 [2024-07-26 16:34:46.402669] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:26.773 [2024-07-26 16:34:46.402687] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:26.773 [2024-07-26 16:34:46.402699] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:26.773 [2024-07-26 16:34:46.402734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.773 [2024-07-26 16:34:46.402766] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:26.773 [2024-07-26 16:34:46.402963] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:26.773 [2024-07-26 16:34:46.402985] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:26.773 [2024-07-26 16:34:46.403001] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:26.773 [2024-07-26 16:34:46.403013] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:26.773 [2024-07-26 16:34:46.403028] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:29:26.773 [2024-07-26 16:34:46.403081] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:29:26.773 [2024-07-26 16:34:46.403129] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:26.773 [2024-07-26 16:34:46.403251] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:29:26.773 [2024-07-26 16:34:46.403265] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:26.773 [2024-07-26 16:34:46.403290] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:26.773 [2024-07-26 16:34:46.403305] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:26.774 [2024-07-26 16:34:46.403317] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:26.774 [2024-07-26 16:34:46.403337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.774 [2024-07-26 16:34:46.403380] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x62600001b100, cid 0, qid 0 00:29:26.774 [2024-07-26 16:34:46.403592] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:26.774 [2024-07-26 16:34:46.403615] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:26.774 [2024-07-26 16:34:46.403627] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:26.774 [2024-07-26 16:34:46.403638] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:26.774 [2024-07-26 16:34:46.403653] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:26.774 [2024-07-26 16:34:46.403691] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:26.774 [2024-07-26 16:34:46.403723] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:26.774 [2024-07-26 16:34:46.403735] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:26.774 [2024-07-26 16:34:46.403757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.774 [2024-07-26 16:34:46.403789] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:26.774 [2024-07-26 16:34:46.403985] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:26.774 [2024-07-26 16:34:46.404011] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:26.774 [2024-07-26 16:34:46.404024] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:26.774 [2024-07-26 16:34:46.404035] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:26.774 [2024-07-26 16:34:46.404075] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:26.774 [2024-07-26 16:34:46.404119] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:29:26.774 [2024-07-26 16:34:46.404143] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:29:26.774 [2024-07-26 16:34:46.404167] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:29:26.774 [2024-07-26 16:34:46.404201] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:26.774 [2024-07-26 16:34:46.404220] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:26.774 [2024-07-26 16:34:46.404240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.774 [2024-07-26 16:34:46.404272] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:26.774 [2024-07-26 16:34:46.404522] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:26.774 [2024-07-26 16:34:46.404552] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:26.774 [2024-07-26 16:34:46.404595] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:26.774 [2024-07-26 16:34:46.404617] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=0 00:29:26.774 [2024-07-26 16:34:46.404634] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:29:26.774 [2024-07-26 16:34:46.404663] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:26.774 [2024-07-26 16:34:46.404700] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:26.774 [2024-07-26 16:34:46.404724] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:26.774 [2024-07-26 16:34:46.404762] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:26.774 [2024-07-26 16:34:46.404783] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:26.774 [2024-07-26 16:34:46.404794] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:26.774 [2024-07-26 16:34:46.404806] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:26.774 [2024-07-26 16:34:46.404834] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:29:26.774 [2024-07-26 16:34:46.404851] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:29:26.774 [2024-07-26 16:34:46.404879] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:29:26.774 [2024-07-26 16:34:46.404894] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:29:26.774 [2024-07-26 16:34:46.404907] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:29:26.774 [2024-07-26 16:34:46.404926] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:29:26.774 [2024-07-26 16:34:46.404967] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:29:26.774 [2024-07-26 16:34:46.404992] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:26.774 [2024-07-26 16:34:46.405020] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:26.774 [2024-07-26 16:34:46.405033] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:26.774 [2024-07-26 16:34:46.405056] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:26.774 [2024-07-26 16:34:46.409119] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:26.774 [2024-07-26 16:34:46.409305] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:26.774 [2024-07-26 16:34:46.409333] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:26.774 [2024-07-26 16:34:46.409347] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:26.774 [2024-07-26 16:34:46.409358] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:26.774 [2024-07-26 16:34:46.409379] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:26.774 [2024-07-26 
16:34:46.409394] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:26.774 [2024-07-26 16:34:46.409407] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:26.774 [2024-07-26 16:34:46.409426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.774 [2024-07-26 16:34:46.409444] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:26.774 [2024-07-26 16:34:46.409456] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:26.774 [2024-07-26 16:34:46.409467] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000015700) 00:29:26.774 [2024-07-26 16:34:46.409483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.774 [2024-07-26 16:34:46.409506] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:26.774 [2024-07-26 16:34:46.409519] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:26.774 [2024-07-26 16:34:46.409530] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000015700) 00:29:26.774 [2024-07-26 16:34:46.409547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.774 [2024-07-26 16:34:46.409563] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:26.774 [2024-07-26 16:34:46.409575] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:26.774 [2024-07-26 16:34:46.409586] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:26.774 [2024-07-26 16:34:46.409602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.774 [2024-07-26 16:34:46.409617] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:29:26.774 [2024-07-26 16:34:46.409647] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:26.774 [2024-07-26 16:34:46.409675] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:26.774 [2024-07-26 16:34:46.409689] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:29:26.774 [2024-07-26 16:34:46.409722] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.774 [2024-07-26 16:34:46.409760] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:26.774 [2024-07-26 16:34:46.409783] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:29:26.774 [2024-07-26 16:34:46.409797] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:29:26.774 [2024-07-26 16:34:46.409810] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:26.774 [2024-07-26 16:34:46.409823] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:29:26.774 [2024-07-26 16:34:46.410039] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:26.774 [2024-07-26 16:34:46.410071] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:26.774 [2024-07-26 16:34:46.410086] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:26.774 [2024-07-26 16:34:46.410097] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:29:26.774 [2024-07-26 16:34:46.410115] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:29:26.774 [2024-07-26 16:34:46.410130] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:29:26.774 [2024-07-26 16:34:46.410172] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:26.774 [2024-07-26 16:34:46.410191] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:29:26.774 [2024-07-26 16:34:46.410212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.775 [2024-07-26 16:34:46.410245] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:29:26.775 [2024-07-26 16:34:46.410466] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:26.775 [2024-07-26 16:34:46.410506] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:26.775 [2024-07-26 16:34:46.410528] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:26.775 [2024-07-26 16:34:46.410554] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:29:26.775 [2024-07-26 16:34:46.410568] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:29:26.775 [2024-07-26 16:34:46.410581] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:26.775 [2024-07-26 16:34:46.410618] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:26.775 [2024-07-26 16:34:46.410635] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:26.775 [2024-07-26 16:34:46.451232] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:26.775 [2024-07-26 16:34:46.451264] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:26.775 [2024-07-26 16:34:46.451278] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:26.775 [2024-07-26 16:34:46.451292] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:29:26.775 [2024-07-26 16:34:46.451333] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:29:26.775 [2024-07-26 16:34:46.451406] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:26.775 [2024-07-26 16:34:46.451440] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:29:26.775 [2024-07-26 16:34:46.451468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.775 [2024-07-26 16:34:46.451488] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:29:26.775 [2024-07-26 16:34:46.451517] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:26.775 [2024-07-26 16:34:46.451529] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:29:26.775 [2024-07-26 16:34:46.451547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.775 [2024-07-26 16:34:46.451585] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:29:26.775 [2024-07-26 16:34:46.451620] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:29:26.775 [2024-07-26 16:34:46.451937] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:26.775 [2024-07-26 16:34:46.451959] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:26.775 [2024-07-26 16:34:46.451971] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:26.775 [2024-07-26 16:34:46.451983] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=1024, cccid=4 00:29:26.775 [2024-07-26 16:34:46.451995] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=1024 00:29:26.775 [2024-07-26 16:34:46.452014] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:26.775 [2024-07-26 16:34:46.452032] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:26.775 [2024-07-26 16:34:46.456076] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:26.775 [2024-07-26 16:34:46.456106] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:26.775 [2024-07-26 16:34:46.456123] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:26.775 [2024-07-26 16:34:46.456134] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:26.775 [2024-07-26 16:34:46.456146] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:29:26.775 [2024-07-26 16:34:46.496103] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:26.775 [2024-07-26 16:34:46.496132] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:26.775 [2024-07-26 16:34:46.496144] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:26.775 [2024-07-26 16:34:46.496156] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:29:26.775 [2024-07-26 16:34:46.496192] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:26.775 [2024-07-26 16:34:46.496209] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:29:26.775 [2024-07-26 16:34:46.496231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.775 [2024-07-26 16:34:46.496276] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:29:26.775 [2024-07-26 16:34:46.496537] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:26.775 [2024-07-26 16:34:46.496566] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:26.775 [2024-07-26 16:34:46.496601] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:26.775 [2024-07-26 
16:34:46.496621] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=3072, cccid=4 00:29:26.775 [2024-07-26 16:34:46.496639] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=3072 00:29:26.775 [2024-07-26 16:34:46.496667] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:26.775 [2024-07-26 16:34:46.496700] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:26.775 [2024-07-26 16:34:46.496722] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:26.775 [2024-07-26 16:34:46.496768] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:26.775 [2024-07-26 16:34:46.496790] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:26.775 [2024-07-26 16:34:46.496801] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:26.775 [2024-07-26 16:34:46.496812] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:29:26.775 [2024-07-26 16:34:46.496839] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:26.775 [2024-07-26 16:34:46.496856] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:29:26.775 [2024-07-26 16:34:46.496888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.775 [2024-07-26 16:34:46.496958] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:29:26.775 [2024-07-26 16:34:46.497228] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:26.775 [2024-07-26 16:34:46.497251] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:26.775 [2024-07-26 16:34:46.497263] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:26.775 [2024-07-26 16:34:46.497275] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=8, cccid=4 00:29:26.775 [2024-07-26 16:34:46.497287] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=8 00:29:26.775 [2024-07-26 16:34:46.497299] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:26.775 [2024-07-26 16:34:46.497337] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:26.775 [2024-07-26 16:34:46.497351] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:27.038 [2024-07-26 16:34:46.542113] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.038 [2024-07-26 16:34:46.542161] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.038 [2024-07-26 16:34:46.542175] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.038 [2024-07-26 16:34:46.542187] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:29:27.038 ===================================================== 00:29:27.038 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:27.038 ===================================================== 00:29:27.038 Controller Capabilities/Features 00:29:27.038 ================================ 00:29:27.038 Vendor ID: 0000 00:29:27.038 Subsystem Vendor ID: 0000 00:29:27.038 
Serial Number: .................... 00:29:27.038 Model Number: ........................................ 00:29:27.038 Firmware Version: 24.09 00:29:27.038 Recommended Arb Burst: 0 00:29:27.038 IEEE OUI Identifier: 00 00 00 00:29:27.038 Multi-path I/O 00:29:27.038 May have multiple subsystem ports: No 00:29:27.038 May have multiple controllers: No 00:29:27.038 Associated with SR-IOV VF: No 00:29:27.038 Max Data Transfer Size: 131072 00:29:27.038 Max Number of Namespaces: 0 00:29:27.038 Max Number of I/O Queues: 1024 00:29:27.038 NVMe Specification Version (VS): 1.3 00:29:27.038 NVMe Specification Version (Identify): 1.3 00:29:27.038 Maximum Queue Entries: 128 00:29:27.038 Contiguous Queues Required: Yes 00:29:27.038 Arbitration Mechanisms Supported 00:29:27.038 Weighted Round Robin: Not Supported 00:29:27.038 Vendor Specific: Not Supported 00:29:27.038 Reset Timeout: 15000 ms 00:29:27.038 Doorbell Stride: 4 bytes 00:29:27.038 NVM Subsystem Reset: Not Supported 00:29:27.038 Command Sets Supported 00:29:27.038 NVM Command Set: Supported 00:29:27.038 Boot Partition: Not Supported 00:29:27.038 Memory Page Size Minimum: 4096 bytes 00:29:27.038 Memory Page Size Maximum: 4096 bytes 00:29:27.038 Persistent Memory Region: Not Supported 00:29:27.038 Optional Asynchronous Events Supported 00:29:27.038 Namespace Attribute Notices: Not Supported 00:29:27.038 Firmware Activation Notices: Not Supported 00:29:27.038 ANA Change Notices: Not Supported 00:29:27.038 PLE Aggregate Log Change Notices: Not Supported 00:29:27.038 LBA Status Info Alert Notices: Not Supported 00:29:27.038 EGE Aggregate Log Change Notices: Not Supported 00:29:27.038 Normal NVM Subsystem Shutdown event: Not Supported 00:29:27.038 Zone Descriptor Change Notices: Not Supported 00:29:27.038 Discovery Log Change Notices: Supported 00:29:27.038 Controller Attributes 00:29:27.038 128-bit Host Identifier: Not Supported 00:29:27.038 Non-Operational Permissive Mode: Not Supported 00:29:27.038 NVM Sets: Not Supported 00:29:27.038 Read Recovery Levels: Not Supported 00:29:27.038 Endurance Groups: Not Supported 00:29:27.038 Predictable Latency Mode: Not Supported 00:29:27.038 Traffic Based Keep ALive: Not Supported 00:29:27.038 Namespace Granularity: Not Supported 00:29:27.038 SQ Associations: Not Supported 00:29:27.038 UUID List: Not Supported 00:29:27.038 Multi-Domain Subsystem: Not Supported 00:29:27.038 Fixed Capacity Management: Not Supported 00:29:27.038 Variable Capacity Management: Not Supported 00:29:27.038 Delete Endurance Group: Not Supported 00:29:27.038 Delete NVM Set: Not Supported 00:29:27.038 Extended LBA Formats Supported: Not Supported 00:29:27.038 Flexible Data Placement Supported: Not Supported 00:29:27.038 00:29:27.038 Controller Memory Buffer Support 00:29:27.038 ================================ 00:29:27.038 Supported: No 00:29:27.038 00:29:27.038 Persistent Memory Region Support 00:29:27.038 ================================ 00:29:27.038 Supported: No 00:29:27.038 00:29:27.038 Admin Command Set Attributes 00:29:27.038 ============================ 00:29:27.038 Security Send/Receive: Not Supported 00:29:27.038 Format NVM: Not Supported 00:29:27.038 Firmware Activate/Download: Not Supported 00:29:27.038 Namespace Management: Not Supported 00:29:27.038 Device Self-Test: Not Supported 00:29:27.038 Directives: Not Supported 00:29:27.038 NVMe-MI: Not Supported 00:29:27.038 Virtualization Management: Not Supported 00:29:27.038 Doorbell Buffer Config: Not Supported 00:29:27.038 Get LBA Status Capability: Not Supported 00:29:27.038 
Command & Feature Lockdown Capability: Not Supported 00:29:27.038 Abort Command Limit: 1 00:29:27.038 Async Event Request Limit: 4 00:29:27.038 Number of Firmware Slots: N/A 00:29:27.038 Firmware Slot 1 Read-Only: N/A 00:29:27.038 Firmware Activation Without Reset: N/A 00:29:27.038 Multiple Update Detection Support: N/A 00:29:27.038 Firmware Update Granularity: No Information Provided 00:29:27.038 Per-Namespace SMART Log: No 00:29:27.039 Asymmetric Namespace Access Log Page: Not Supported 00:29:27.039 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:29:27.039 Command Effects Log Page: Not Supported 00:29:27.039 Get Log Page Extended Data: Supported 00:29:27.039 Telemetry Log Pages: Not Supported 00:29:27.039 Persistent Event Log Pages: Not Supported 00:29:27.039 Supported Log Pages Log Page: May Support 00:29:27.039 Commands Supported & Effects Log Page: Not Supported 00:29:27.039 Feature Identifiers & Effects Log Page:May Support 00:29:27.039 NVMe-MI Commands & Effects Log Page: May Support 00:29:27.039 Data Area 4 for Telemetry Log: Not Supported 00:29:27.039 Error Log Page Entries Supported: 128 00:29:27.039 Keep Alive: Not Supported 00:29:27.039 00:29:27.039 NVM Command Set Attributes 00:29:27.039 ========================== 00:29:27.039 Submission Queue Entry Size 00:29:27.039 Max: 1 00:29:27.039 Min: 1 00:29:27.039 Completion Queue Entry Size 00:29:27.039 Max: 1 00:29:27.039 Min: 1 00:29:27.039 Number of Namespaces: 0 00:29:27.039 Compare Command: Not Supported 00:29:27.039 Write Uncorrectable Command: Not Supported 00:29:27.039 Dataset Management Command: Not Supported 00:29:27.039 Write Zeroes Command: Not Supported 00:29:27.039 Set Features Save Field: Not Supported 00:29:27.039 Reservations: Not Supported 00:29:27.039 Timestamp: Not Supported 00:29:27.039 Copy: Not Supported 00:29:27.039 Volatile Write Cache: Not Present 00:29:27.039 Atomic Write Unit (Normal): 1 00:29:27.039 Atomic Write Unit (PFail): 1 00:29:27.039 Atomic Compare & Write Unit: 1 00:29:27.039 Fused Compare & Write: Supported 00:29:27.039 Scatter-Gather List 00:29:27.039 SGL Command Set: Supported 00:29:27.039 SGL Keyed: Supported 00:29:27.039 SGL Bit Bucket Descriptor: Not Supported 00:29:27.039 SGL Metadata Pointer: Not Supported 00:29:27.039 Oversized SGL: Not Supported 00:29:27.039 SGL Metadata Address: Not Supported 00:29:27.039 SGL Offset: Supported 00:29:27.039 Transport SGL Data Block: Not Supported 00:29:27.039 Replay Protected Memory Block: Not Supported 00:29:27.039 00:29:27.039 Firmware Slot Information 00:29:27.039 ========================= 00:29:27.039 Active slot: 0 00:29:27.039 00:29:27.039 00:29:27.039 Error Log 00:29:27.039 ========= 00:29:27.039 00:29:27.039 Active Namespaces 00:29:27.039 ================= 00:29:27.039 Discovery Log Page 00:29:27.039 ================== 00:29:27.039 Generation Counter: 2 00:29:27.039 Number of Records: 2 00:29:27.039 Record Format: 0 00:29:27.039 00:29:27.039 Discovery Log Entry 0 00:29:27.039 ---------------------- 00:29:27.039 Transport Type: 3 (TCP) 00:29:27.039 Address Family: 1 (IPv4) 00:29:27.039 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:27.039 Entry Flags: 00:29:27.039 Duplicate Returned Information: 1 00:29:27.039 Explicit Persistent Connection Support for Discovery: 1 00:29:27.039 Transport Requirements: 00:29:27.039 Secure Channel: Not Required 00:29:27.039 Port ID: 0 (0x0000) 00:29:27.039 Controller ID: 65535 (0xffff) 00:29:27.039 Admin Max SQ Size: 128 00:29:27.039 Transport Service Identifier: 4420 00:29:27.039 NVM Subsystem 
Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:27.039 Transport Address: 10.0.0.2 00:29:27.039 Discovery Log Entry 1 00:29:27.039 ---------------------- 00:29:27.039 Transport Type: 3 (TCP) 00:29:27.039 Address Family: 1 (IPv4) 00:29:27.039 Subsystem Type: 2 (NVM Subsystem) 00:29:27.039 Entry Flags: 00:29:27.039 Duplicate Returned Information: 0 00:29:27.039 Explicit Persistent Connection Support for Discovery: 0 00:29:27.039 Transport Requirements: 00:29:27.039 Secure Channel: Not Required 00:29:27.039 Port ID: 0 (0x0000) 00:29:27.039 Controller ID: 65535 (0xffff) 00:29:27.039 Admin Max SQ Size: 128 00:29:27.039 Transport Service Identifier: 4420 00:29:27.039 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:27.039 Transport Address: 10.0.0.2 [2024-07-26 16:34:46.542401] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:29:27.039 [2024-07-26 16:34:46.542448] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:27.039 [2024-07-26 16:34:46.542472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.039 [2024-07-26 16:34:46.542487] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000015700 00:29:27.039 [2024-07-26 16:34:46.542501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.039 [2024-07-26 16:34:46.542513] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000015700 00:29:27.039 [2024-07-26 16:34:46.542527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.039 [2024-07-26 16:34:46.542539] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:27.039 [2024-07-26 16:34:46.542552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.039 [2024-07-26 16:34:46.542574] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.039 [2024-07-26 16:34:46.542588] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.039 [2024-07-26 16:34:46.542600] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:27.039 [2024-07-26 16:34:46.542620] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.039 [2024-07-26 16:34:46.542662] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:27.039 [2024-07-26 16:34:46.542832] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.039 [2024-07-26 16:34:46.542855] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.039 [2024-07-26 16:34:46.542868] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.039 [2024-07-26 16:34:46.542880] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:27.039 [2024-07-26 16:34:46.542906] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.039 [2024-07-26 16:34:46.542925] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
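The two discovery log entries above (the discovery subsystem itself plus nqn.2016-06.io.spdk:cnode1 backed by Malloc0) are the result of the rpc_cmd configuration performed earlier in this run. A condensed sketch of that RPC sequence, assuming the repository's scripts/rpc.py and the default /var/tmp/spdk.sock socket that nvmf_tgt is listening on, would be:

# Target configuration behind the discovery log above (sketch; method names and flags copied
# verbatim from the rpc_cmd calls in the trace, rpc.py path assumed from this workspace layout)
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB malloc bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_get_subsystems                                     # prints the JSON listing shown earlier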
00:29:27.039 [2024-07-26 16:34:46.542938] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:27.039 [2024-07-26 16:34:46.542958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.039 [2024-07-26 16:34:46.543022] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:27.039 [2024-07-26 16:34:46.543262] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.039 [2024-07-26 16:34:46.543286] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.039 [2024-07-26 16:34:46.543298] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.039 [2024-07-26 16:34:46.543309] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:27.039 [2024-07-26 16:34:46.543324] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:29:27.039 [2024-07-26 16:34:46.543339] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:29:27.039 [2024-07-26 16:34:46.543377] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.039 [2024-07-26 16:34:46.543419] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.039 [2024-07-26 16:34:46.543431] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:27.039 [2024-07-26 16:34:46.543451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.039 [2024-07-26 16:34:46.543500] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:27.039 [2024-07-26 16:34:46.543675] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.039 [2024-07-26 16:34:46.543697] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.039 [2024-07-26 16:34:46.543709] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.039 [2024-07-26 16:34:46.543720] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:27.039 [2024-07-26 16:34:46.543749] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.039 [2024-07-26 16:34:46.543765] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.039 [2024-07-26 16:34:46.543776] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:27.040 [2024-07-26 16:34:46.543794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.040 [2024-07-26 16:34:46.543838] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:27.040 [2024-07-26 16:34:46.544034] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.040 [2024-07-26 16:34:46.544082] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.040 [2024-07-26 16:34:46.544096] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.040 [2024-07-26 16:34:46.544107] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:27.040 [2024-07-26 
16:34:46.544136] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.040 [2024-07-26 16:34:46.544154] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.040 [2024-07-26 16:34:46.544165] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:27.040 [2024-07-26 16:34:46.544199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.040 [2024-07-26 16:34:46.544230] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:27.040 [2024-07-26 16:34:46.544438] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.040 [2024-07-26 16:34:46.544460] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.040 [2024-07-26 16:34:46.544471] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.040 [2024-07-26 16:34:46.544486] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:27.040 [2024-07-26 16:34:46.544515] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.040 [2024-07-26 16:34:46.544532] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.040 [2024-07-26 16:34:46.544558] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:27.040 [2024-07-26 16:34:46.544576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.040 [2024-07-26 16:34:46.544605] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:27.040 [2024-07-26 16:34:46.544772] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.040 [2024-07-26 16:34:46.544794] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.040 [2024-07-26 16:34:46.544806] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.040 [2024-07-26 16:34:46.544817] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:27.040 [2024-07-26 16:34:46.544845] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.040 [2024-07-26 16:34:46.544862] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.040 [2024-07-26 16:34:46.544873] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:27.040 [2024-07-26 16:34:46.544891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.040 [2024-07-26 16:34:46.544935] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:27.040 [2024-07-26 16:34:46.545135] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.040 [2024-07-26 16:34:46.545166] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.040 [2024-07-26 16:34:46.545178] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.040 [2024-07-26 16:34:46.545190] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:27.040 [2024-07-26 16:34:46.545218] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.040 [2024-07-26 16:34:46.545235] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.040 [2024-07-26 16:34:46.545246] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:27.040 [2024-07-26 16:34:46.545283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.040 [2024-07-26 16:34:46.545315] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:27.040 [2024-07-26 16:34:46.545506] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.040 [2024-07-26 16:34:46.545528] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.040 [2024-07-26 16:34:46.545539] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.040 [2024-07-26 16:34:46.545550] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:27.040 [2024-07-26 16:34:46.545579] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.040 [2024-07-26 16:34:46.545595] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.040 [2024-07-26 16:34:46.545606] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:27.040 [2024-07-26 16:34:46.545624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.040 [2024-07-26 16:34:46.545668] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:27.040 [2024-07-26 16:34:46.545858] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.040 [2024-07-26 16:34:46.545879] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.040 [2024-07-26 16:34:46.545891] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.040 [2024-07-26 16:34:46.545906] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:27.040 [2024-07-26 16:34:46.545935] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.040 [2024-07-26 16:34:46.545952] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.040 [2024-07-26 16:34:46.545963] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:27.040 [2024-07-26 16:34:46.545981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.040 [2024-07-26 16:34:46.546026] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:27.040 [2024-07-26 16:34:46.550099] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.040 [2024-07-26 16:34:46.550123] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.040 [2024-07-26 16:34:46.550135] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.040 [2024-07-26 16:34:46.550146] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:27.040 [2024-07-26 16:34:46.550176] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.040 [2024-07-26 16:34:46.550193] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.040 [2024-07-26 16:34:46.550204] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:27.040 [2024-07-26 16:34:46.550222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.040 [2024-07-26 16:34:46.550254] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:27.040 [2024-07-26 16:34:46.550443] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.040 [2024-07-26 16:34:46.550465] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.040 [2024-07-26 16:34:46.550477] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.040 [2024-07-26 16:34:46.550488] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:27.040 [2024-07-26 16:34:46.550511] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:29:27.040 00:29:27.040 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:29:27.040 [2024-07-26 16:34:46.649622] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:29:27.040 [2024-07-26 16:34:46.649714] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid754531 ] 00:29:27.040 EAL: No free 2048 kB hugepages reported on node 1 00:29:27.040 [2024-07-26 16:34:46.705569] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:29:27.040 [2024-07-26 16:34:46.705700] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:27.040 [2024-07-26 16:34:46.705721] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:27.040 [2024-07-26 16:34:46.705761] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:27.040 [2024-07-26 16:34:46.705788] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:27.040 [2024-07-26 16:34:46.709151] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:29:27.040 [2024-07-26 16:34:46.709241] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000015700 0 00:29:27.040 [2024-07-26 16:34:46.717080] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:27.040 [2024-07-26 16:34:46.717112] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:27.040 [2024-07-26 16:34:46.717129] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:27.040 [2024-07-26 16:34:46.717139] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:27.040 [2024-07-26 16:34:46.717234] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.040 [2024-07-26 16:34:46.717258] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.040 [2024-07-26 16:34:46.717280] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 
00:29:27.040 [2024-07-26 16:34:46.717316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:27.040 [2024-07-26 16:34:46.717358] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:27.040 [2024-07-26 16:34:46.725107] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.040 [2024-07-26 16:34:46.725135] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.040 [2024-07-26 16:34:46.725149] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.040 [2024-07-26 16:34:46.725164] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:27.040 [2024-07-26 16:34:46.725191] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:27.040 [2024-07-26 16:34:46.725231] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:29:27.040 [2024-07-26 16:34:46.725255] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:29:27.040 [2024-07-26 16:34:46.725288] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.040 [2024-07-26 16:34:46.725304] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.041 [2024-07-26 16:34:46.725316] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:27.041 [2024-07-26 16:34:46.725341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.041 [2024-07-26 16:34:46.725393] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:27.041 [2024-07-26 16:34:46.725645] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.041 [2024-07-26 16:34:46.725668] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.041 [2024-07-26 16:34:46.725681] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.041 [2024-07-26 16:34:46.725693] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:27.041 [2024-07-26 16:34:46.725719] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:29:27.041 [2024-07-26 16:34:46.725762] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:29:27.041 [2024-07-26 16:34:46.725784] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.041 [2024-07-26 16:34:46.725797] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.041 [2024-07-26 16:34:46.725809] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:27.041 [2024-07-26 16:34:46.725834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.041 [2024-07-26 16:34:46.725868] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:27.041 [2024-07-26 16:34:46.726077] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.041 [2024-07-26 16:34:46.726098] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:29:27.041 [2024-07-26 16:34:46.726110] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.041 [2024-07-26 16:34:46.726125] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:27.041 [2024-07-26 16:34:46.726147] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:29:27.041 [2024-07-26 16:34:46.726173] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:29:27.041 [2024-07-26 16:34:46.726198] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.041 [2024-07-26 16:34:46.726213] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.041 [2024-07-26 16:34:46.726225] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:27.041 [2024-07-26 16:34:46.726245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.041 [2024-07-26 16:34:46.726277] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:27.041 [2024-07-26 16:34:46.726429] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.041 [2024-07-26 16:34:46.726451] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.041 [2024-07-26 16:34:46.726462] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.041 [2024-07-26 16:34:46.726473] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:27.041 [2024-07-26 16:34:46.726489] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:27.041 [2024-07-26 16:34:46.726517] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.041 [2024-07-26 16:34:46.726533] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.041 [2024-07-26 16:34:46.726545] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:27.041 [2024-07-26 16:34:46.726564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.041 [2024-07-26 16:34:46.726618] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:27.041 [2024-07-26 16:34:46.726829] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.041 [2024-07-26 16:34:46.726851] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.041 [2024-07-26 16:34:46.726863] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.041 [2024-07-26 16:34:46.726879] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:27.041 [2024-07-26 16:34:46.726895] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:29:27.041 [2024-07-26 16:34:46.726910] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:29:27.041 [2024-07-26 16:34:46.726933] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state 
to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:27.041 [2024-07-26 16:34:46.727051] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:29:27.041 [2024-07-26 16:34:46.727074] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:27.041 [2024-07-26 16:34:46.727110] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.041 [2024-07-26 16:34:46.727127] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.041 [2024-07-26 16:34:46.727139] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:27.041 [2024-07-26 16:34:46.727158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.041 [2024-07-26 16:34:46.727213] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:27.041 [2024-07-26 16:34:46.727397] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.041 [2024-07-26 16:34:46.727427] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.041 [2024-07-26 16:34:46.727440] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.041 [2024-07-26 16:34:46.727451] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:27.041 [2024-07-26 16:34:46.727465] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:27.041 [2024-07-26 16:34:46.727492] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.041 [2024-07-26 16:34:46.727508] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.041 [2024-07-26 16:34:46.727520] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:27.041 [2024-07-26 16:34:46.727539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.041 [2024-07-26 16:34:46.727586] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:27.041 [2024-07-26 16:34:46.727814] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.041 [2024-07-26 16:34:46.727836] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.041 [2024-07-26 16:34:46.727847] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.041 [2024-07-26 16:34:46.727858] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:27.041 [2024-07-26 16:34:46.727878] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:27.041 [2024-07-26 16:34:46.727905] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:29:27.041 [2024-07-26 16:34:46.727928] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:29:27.041 [2024-07-26 16:34:46.727955] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify 
controller (timeout 30000 ms) 00:29:27.041 [2024-07-26 16:34:46.727988] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.041 [2024-07-26 16:34:46.728003] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:27.041 [2024-07-26 16:34:46.728023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.041 [2024-07-26 16:34:46.728055] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:27.041 [2024-07-26 16:34:46.728289] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:27.041 [2024-07-26 16:34:46.728314] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:27.041 [2024-07-26 16:34:46.728327] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:27.041 [2024-07-26 16:34:46.728341] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=0 00:29:27.041 [2024-07-26 16:34:46.728370] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:29:27.041 [2024-07-26 16:34:46.728384] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.041 [2024-07-26 16:34:46.728429] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:27.041 [2024-07-26 16:34:46.728447] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:27.041 [2024-07-26 16:34:46.728557] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.041 [2024-07-26 16:34:46.728576] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.041 [2024-07-26 16:34:46.728587] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.041 [2024-07-26 16:34:46.728598] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:27.041 [2024-07-26 16:34:46.728631] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:29:27.041 [2024-07-26 16:34:46.728649] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:29:27.041 [2024-07-26 16:34:46.728672] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:29:27.041 [2024-07-26 16:34:46.728687] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:29:27.041 [2024-07-26 16:34:46.728700] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:29:27.041 [2024-07-26 16:34:46.728714] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:29:27.041 [2024-07-26 16:34:46.728753] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:29:27.041 [2024-07-26 16:34:46.728778] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.041 [2024-07-26 16:34:46.728793] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.041 [2024-07-26 16:34:46.728804] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:27.041 [2024-07-26 
16:34:46.728828] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:27.041 [2024-07-26 16:34:46.728864] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:27.041 [2024-07-26 16:34:46.733069] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.042 [2024-07-26 16:34:46.733097] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.042 [2024-07-26 16:34:46.733109] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.042 [2024-07-26 16:34:46.733120] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:27.042 [2024-07-26 16:34:46.733141] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.042 [2024-07-26 16:34:46.733155] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.042 [2024-07-26 16:34:46.733167] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:27.042 [2024-07-26 16:34:46.733194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:27.042 [2024-07-26 16:34:46.733216] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.042 [2024-07-26 16:34:46.733228] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.042 [2024-07-26 16:34:46.733239] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000015700) 00:29:27.042 [2024-07-26 16:34:46.733254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:27.042 [2024-07-26 16:34:46.733269] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.042 [2024-07-26 16:34:46.733280] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.042 [2024-07-26 16:34:46.733290] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000015700) 00:29:27.042 [2024-07-26 16:34:46.733306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:27.042 [2024-07-26 16:34:46.733320] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.042 [2024-07-26 16:34:46.733331] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.042 [2024-07-26 16:34:46.733345] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:27.042 [2024-07-26 16:34:46.733362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:27.042 [2024-07-26 16:34:46.733376] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:27.042 [2024-07-26 16:34:46.733408] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:27.042 [2024-07-26 16:34:46.733429] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.042 [2024-07-26 16:34:46.733443] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:29:27.042 [2024-07-26 
16:34:46.733462] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.042 [2024-07-26 16:34:46.733515] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:27.042 [2024-07-26 16:34:46.733535] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:29:27.042 [2024-07-26 16:34:46.733548] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:29:27.042 [2024-07-26 16:34:46.733561] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:27.042 [2024-07-26 16:34:46.733573] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:29:27.042 [2024-07-26 16:34:46.733781] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.042 [2024-07-26 16:34:46.733808] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.042 [2024-07-26 16:34:46.733840] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.042 [2024-07-26 16:34:46.733852] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:29:27.042 [2024-07-26 16:34:46.733869] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:29:27.042 [2024-07-26 16:34:46.733884] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:29:27.042 [2024-07-26 16:34:46.733906] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:29:27.042 [2024-07-26 16:34:46.733925] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:27.042 [2024-07-26 16:34:46.733942] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.042 [2024-07-26 16:34:46.733955] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.042 [2024-07-26 16:34:46.733966] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:29:27.042 [2024-07-26 16:34:46.733985] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:27.042 [2024-07-26 16:34:46.734016] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:29:27.042 [2024-07-26 16:34:46.734220] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.042 [2024-07-26 16:34:46.734242] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.042 [2024-07-26 16:34:46.734253] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.042 [2024-07-26 16:34:46.734264] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:29:27.042 [2024-07-26 16:34:46.734381] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:29:27.042 [2024-07-26 16:34:46.734425] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns 
(timeout 30000 ms) 00:29:27.042 [2024-07-26 16:34:46.734458] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.042 [2024-07-26 16:34:46.734473] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:29:27.042 [2024-07-26 16:34:46.734492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.042 [2024-07-26 16:34:46.734527] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:29:27.042 [2024-07-26 16:34:46.734760] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:27.042 [2024-07-26 16:34:46.734781] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:27.042 [2024-07-26 16:34:46.734792] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:27.042 [2024-07-26 16:34:46.734807] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:29:27.042 [2024-07-26 16:34:46.734821] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:29:27.042 [2024-07-26 16:34:46.734833] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.042 [2024-07-26 16:34:46.734854] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:27.042 [2024-07-26 16:34:46.734868] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:27.042 [2024-07-26 16:34:46.734886] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.042 [2024-07-26 16:34:46.734901] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.042 [2024-07-26 16:34:46.734912] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.042 [2024-07-26 16:34:46.734923] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:29:27.042 [2024-07-26 16:34:46.734973] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:29:27.042 [2024-07-26 16:34:46.735007] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:29:27.042 [2024-07-26 16:34:46.735049] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:29:27.042 [2024-07-26 16:34:46.735086] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.042 [2024-07-26 16:34:46.735102] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:29:27.042 [2024-07-26 16:34:46.735122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.042 [2024-07-26 16:34:46.735160] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:29:27.042 [2024-07-26 16:34:46.735362] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:27.042 [2024-07-26 16:34:46.735385] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:27.042 [2024-07-26 16:34:46.735396] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:27.042 [2024-07-26 16:34:46.735407] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:29:27.042 [2024-07-26 16:34:46.735419] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:29:27.042 [2024-07-26 16:34:46.735430] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.042 [2024-07-26 16:34:46.735457] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:27.042 [2024-07-26 16:34:46.735472] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:27.042 [2024-07-26 16:34:46.778078] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.042 [2024-07-26 16:34:46.778108] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.042 [2024-07-26 16:34:46.778121] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.042 [2024-07-26 16:34:46.778133] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:29:27.042 [2024-07-26 16:34:46.778180] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:29:27.042 [2024-07-26 16:34:46.778227] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:27.042 [2024-07-26 16:34:46.778261] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.042 [2024-07-26 16:34:46.778278] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:29:27.042 [2024-07-26 16:34:46.778300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.042 [2024-07-26 16:34:46.778335] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:29:27.042 [2024-07-26 16:34:46.778533] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:27.042 [2024-07-26 16:34:46.778554] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:27.042 [2024-07-26 16:34:46.778565] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:27.042 [2024-07-26 16:34:46.778576] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:29:27.042 [2024-07-26 16:34:46.778589] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:29:27.043 [2024-07-26 16:34:46.778600] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.043 [2024-07-26 16:34:46.778629] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:27.043 [2024-07-26 16:34:46.778655] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:27.306 [2024-07-26 16:34:46.823108] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.306 [2024-07-26 16:34:46.823154] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.306 [2024-07-26 16:34:46.823167] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.306 [2024-07-26 16:34:46.823179] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:29:27.306 [2024-07-26 16:34:46.823216] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:27.306 [2024-07-26 16:34:46.823243] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:29:27.306 [2024-07-26 16:34:46.823267] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:29:27.306 [2024-07-26 16:34:46.823286] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:29:27.306 [2024-07-26 16:34:46.823302] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:27.306 [2024-07-26 16:34:46.823317] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:29:27.306 [2024-07-26 16:34:46.823337] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:29:27.306 [2024-07-26 16:34:46.823350] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:29:27.306 [2024-07-26 16:34:46.823365] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:29:27.306 [2024-07-26 16:34:46.823444] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.306 [2024-07-26 16:34:46.823463] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:29:27.306 [2024-07-26 16:34:46.823484] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.306 [2024-07-26 16:34:46.823510] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.306 [2024-07-26 16:34:46.823524] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.306 [2024-07-26 16:34:46.823551] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:29:27.306 [2024-07-26 16:34:46.823572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:27.306 [2024-07-26 16:34:46.823611] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:29:27.306 [2024-07-26 16:34:46.823651] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:29:27.306 [2024-07-26 16:34:46.823850] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.306 [2024-07-26 16:34:46.823871] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.307 [2024-07-26 16:34:46.823883] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.307 [2024-07-26 16:34:46.823895] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:29:27.307 [2024-07-26 16:34:46.823936] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.307 [2024-07-26 16:34:46.823954] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.307 [2024-07-26 16:34:46.823965] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.307 [2024-07-26 16:34:46.823975] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:29:27.307 [2024-07-26 16:34:46.823999] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.307 [2024-07-26 16:34:46.824014] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:29:27.307 [2024-07-26 16:34:46.824032] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.307 [2024-07-26 16:34:46.824089] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:29:27.307 [2024-07-26 16:34:46.824266] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.307 [2024-07-26 16:34:46.824286] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.307 [2024-07-26 16:34:46.824297] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.307 [2024-07-26 16:34:46.824308] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:29:27.307 [2024-07-26 16:34:46.824333] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.307 [2024-07-26 16:34:46.824349] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:29:27.307 [2024-07-26 16:34:46.824367] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.307 [2024-07-26 16:34:46.824397] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:29:27.307 [2024-07-26 16:34:46.824658] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.307 [2024-07-26 16:34:46.824681] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.307 [2024-07-26 16:34:46.824692] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.307 [2024-07-26 16:34:46.824704] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:29:27.307 [2024-07-26 16:34:46.824729] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.307 [2024-07-26 16:34:46.824744] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:29:27.307 [2024-07-26 16:34:46.824762] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.307 [2024-07-26 16:34:46.824808] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:29:27.307 [2024-07-26 16:34:46.825034] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.307 [2024-07-26 16:34:46.825055] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.307 [2024-07-26 16:34:46.825079] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.307 [2024-07-26 16:34:46.825091] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:29:27.307 [2024-07-26 16:34:46.825134] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.307 [2024-07-26 16:34:46.825157] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:29:27.307 [2024-07-26 16:34:46.825178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.307 [2024-07-26 16:34:46.825200] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.307 [2024-07-26 16:34:46.825215] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:29:27.307 [2024-07-26 16:34:46.825233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.307 [2024-07-26 16:34:46.825254] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.307 [2024-07-26 16:34:46.825274] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x615000015700) 00:29:27.307 [2024-07-26 16:34:46.825292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.307 [2024-07-26 16:34:46.825317] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.307 [2024-07-26 16:34:46.825332] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000015700) 00:29:27.307 [2024-07-26 16:34:46.825366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.307 [2024-07-26 16:34:46.825400] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:29:27.307 [2024-07-26 16:34:46.825434] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:29:27.307 [2024-07-26 16:34:46.825448] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:29:27.307 [2024-07-26 16:34:46.825460] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:29:27.307 [2024-07-26 16:34:46.825872] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:27.307 [2024-07-26 16:34:46.825896] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:27.307 [2024-07-26 16:34:46.825909] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:27.307 [2024-07-26 16:34:46.825921] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=8192, cccid=5 00:29:27.307 [2024-07-26 16:34:46.825933] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x615000015700): expected_datao=0, payload_size=8192 00:29:27.307 [2024-07-26 16:34:46.825952] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.307 [2024-07-26 16:34:46.825986] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:27.307 [2024-07-26 16:34:46.826002] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:27.307 [2024-07-26 16:34:46.826017] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:27.307 [2024-07-26 16:34:46.826032] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:27.307 [2024-07-26 16:34:46.826042] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 
00:29:27.307 [2024-07-26 16:34:46.826053] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=512, cccid=4 00:29:27.307 [2024-07-26 16:34:46.826076] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=512 00:29:27.307 [2024-07-26 16:34:46.826089] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.307 [2024-07-26 16:34:46.826108] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:27.307 [2024-07-26 16:34:46.826120] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:27.307 [2024-07-26 16:34:46.826140] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:27.307 [2024-07-26 16:34:46.826156] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:27.307 [2024-07-26 16:34:46.826166] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:27.307 [2024-07-26 16:34:46.826180] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=512, cccid=6 00:29:27.307 [2024-07-26 16:34:46.826193] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x615000015700): expected_datao=0, payload_size=512 00:29:27.307 [2024-07-26 16:34:46.826204] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.307 [2024-07-26 16:34:46.826220] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:27.307 [2024-07-26 16:34:46.826232] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:27.307 [2024-07-26 16:34:46.826245] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:27.307 [2024-07-26 16:34:46.826260] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:27.307 [2024-07-26 16:34:46.826270] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:27.307 [2024-07-26 16:34:46.826280] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=7 00:29:27.307 [2024-07-26 16:34:46.826292] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:29:27.307 [2024-07-26 16:34:46.826303] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.307 [2024-07-26 16:34:46.826333] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:27.307 [2024-07-26 16:34:46.826346] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:27.307 [2024-07-26 16:34:46.826363] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.307 [2024-07-26 16:34:46.826378] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.307 [2024-07-26 16:34:46.826388] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.307 [2024-07-26 16:34:46.826399] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:29:27.307 [2024-07-26 16:34:46.826450] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.307 [2024-07-26 16:34:46.826467] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.307 [2024-07-26 16:34:46.826478] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.307 [2024-07-26 16:34:46.826488] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on 
tqpair=0x615000015700 00:29:27.307 [2024-07-26 16:34:46.826510] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.307 [2024-07-26 16:34:46.826527] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.307 [2024-07-26 16:34:46.826536] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.307 [2024-07-26 16:34:46.826546] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x615000015700 00:29:27.307 [2024-07-26 16:34:46.826568] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.307 [2024-07-26 16:34:46.826584] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.307 [2024-07-26 16:34:46.826594] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.307 [2024-07-26 16:34:46.826604] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000015700 00:29:27.307 ===================================================== 00:29:27.307 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:27.307 ===================================================== 00:29:27.307 Controller Capabilities/Features 00:29:27.307 ================================ 00:29:27.308 Vendor ID: 8086 00:29:27.308 Subsystem Vendor ID: 8086 00:29:27.308 Serial Number: SPDK00000000000001 00:29:27.308 Model Number: SPDK bdev Controller 00:29:27.308 Firmware Version: 24.09 00:29:27.308 Recommended Arb Burst: 6 00:29:27.308 IEEE OUI Identifier: e4 d2 5c 00:29:27.308 Multi-path I/O 00:29:27.308 May have multiple subsystem ports: Yes 00:29:27.308 May have multiple controllers: Yes 00:29:27.308 Associated with SR-IOV VF: No 00:29:27.308 Max Data Transfer Size: 131072 00:29:27.308 Max Number of Namespaces: 32 00:29:27.308 Max Number of I/O Queues: 127 00:29:27.308 NVMe Specification Version (VS): 1.3 00:29:27.308 NVMe Specification Version (Identify): 1.3 00:29:27.308 Maximum Queue Entries: 128 00:29:27.308 Contiguous Queues Required: Yes 00:29:27.308 Arbitration Mechanisms Supported 00:29:27.308 Weighted Round Robin: Not Supported 00:29:27.308 Vendor Specific: Not Supported 00:29:27.308 Reset Timeout: 15000 ms 00:29:27.308 Doorbell Stride: 4 bytes 00:29:27.308 NVM Subsystem Reset: Not Supported 00:29:27.308 Command Sets Supported 00:29:27.308 NVM Command Set: Supported 00:29:27.308 Boot Partition: Not Supported 00:29:27.308 Memory Page Size Minimum: 4096 bytes 00:29:27.308 Memory Page Size Maximum: 4096 bytes 00:29:27.308 Persistent Memory Region: Not Supported 00:29:27.308 Optional Asynchronous Events Supported 00:29:27.308 Namespace Attribute Notices: Supported 00:29:27.308 Firmware Activation Notices: Not Supported 00:29:27.308 ANA Change Notices: Not Supported 00:29:27.308 PLE Aggregate Log Change Notices: Not Supported 00:29:27.308 LBA Status Info Alert Notices: Not Supported 00:29:27.308 EGE Aggregate Log Change Notices: Not Supported 00:29:27.308 Normal NVM Subsystem Shutdown event: Not Supported 00:29:27.308 Zone Descriptor Change Notices: Not Supported 00:29:27.308 Discovery Log Change Notices: Not Supported 00:29:27.308 Controller Attributes 00:29:27.308 128-bit Host Identifier: Supported 00:29:27.308 Non-Operational Permissive Mode: Not Supported 00:29:27.308 NVM Sets: Not Supported 00:29:27.308 Read Recovery Levels: Not Supported 00:29:27.308 Endurance Groups: Not Supported 00:29:27.308 Predictable Latency Mode: Not Supported 00:29:27.308 Traffic Based Keep ALive: Not Supported 00:29:27.308 Namespace 
Granularity: Not Supported 00:29:27.308 SQ Associations: Not Supported 00:29:27.308 UUID List: Not Supported 00:29:27.308 Multi-Domain Subsystem: Not Supported 00:29:27.308 Fixed Capacity Management: Not Supported 00:29:27.308 Variable Capacity Management: Not Supported 00:29:27.308 Delete Endurance Group: Not Supported 00:29:27.308 Delete NVM Set: Not Supported 00:29:27.308 Extended LBA Formats Supported: Not Supported 00:29:27.308 Flexible Data Placement Supported: Not Supported 00:29:27.308 00:29:27.308 Controller Memory Buffer Support 00:29:27.308 ================================ 00:29:27.308 Supported: No 00:29:27.308 00:29:27.308 Persistent Memory Region Support 00:29:27.308 ================================ 00:29:27.308 Supported: No 00:29:27.308 00:29:27.308 Admin Command Set Attributes 00:29:27.308 ============================ 00:29:27.308 Security Send/Receive: Not Supported 00:29:27.308 Format NVM: Not Supported 00:29:27.308 Firmware Activate/Download: Not Supported 00:29:27.308 Namespace Management: Not Supported 00:29:27.308 Device Self-Test: Not Supported 00:29:27.308 Directives: Not Supported 00:29:27.308 NVMe-MI: Not Supported 00:29:27.308 Virtualization Management: Not Supported 00:29:27.308 Doorbell Buffer Config: Not Supported 00:29:27.308 Get LBA Status Capability: Not Supported 00:29:27.308 Command & Feature Lockdown Capability: Not Supported 00:29:27.308 Abort Command Limit: 4 00:29:27.308 Async Event Request Limit: 4 00:29:27.308 Number of Firmware Slots: N/A 00:29:27.308 Firmware Slot 1 Read-Only: N/A 00:29:27.308 Firmware Activation Without Reset: N/A 00:29:27.308 Multiple Update Detection Support: N/A 00:29:27.308 Firmware Update Granularity: No Information Provided 00:29:27.308 Per-Namespace SMART Log: No 00:29:27.308 Asymmetric Namespace Access Log Page: Not Supported 00:29:27.308 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:29:27.308 Command Effects Log Page: Supported 00:29:27.308 Get Log Page Extended Data: Supported 00:29:27.308 Telemetry Log Pages: Not Supported 00:29:27.308 Persistent Event Log Pages: Not Supported 00:29:27.308 Supported Log Pages Log Page: May Support 00:29:27.308 Commands Supported & Effects Log Page: Not Supported 00:29:27.308 Feature Identifiers & Effects Log Page:May Support 00:29:27.308 NVMe-MI Commands & Effects Log Page: May Support 00:29:27.308 Data Area 4 for Telemetry Log: Not Supported 00:29:27.308 Error Log Page Entries Supported: 128 00:29:27.308 Keep Alive: Supported 00:29:27.308 Keep Alive Granularity: 10000 ms 00:29:27.308 00:29:27.308 NVM Command Set Attributes 00:29:27.308 ========================== 00:29:27.308 Submission Queue Entry Size 00:29:27.308 Max: 64 00:29:27.308 Min: 64 00:29:27.308 Completion Queue Entry Size 00:29:27.308 Max: 16 00:29:27.308 Min: 16 00:29:27.308 Number of Namespaces: 32 00:29:27.308 Compare Command: Supported 00:29:27.308 Write Uncorrectable Command: Not Supported 00:29:27.308 Dataset Management Command: Supported 00:29:27.308 Write Zeroes Command: Supported 00:29:27.308 Set Features Save Field: Not Supported 00:29:27.308 Reservations: Supported 00:29:27.308 Timestamp: Not Supported 00:29:27.308 Copy: Supported 00:29:27.308 Volatile Write Cache: Present 00:29:27.308 Atomic Write Unit (Normal): 1 00:29:27.308 Atomic Write Unit (PFail): 1 00:29:27.308 Atomic Compare & Write Unit: 1 00:29:27.308 Fused Compare & Write: Supported 00:29:27.308 Scatter-Gather List 00:29:27.308 SGL Command Set: Supported 00:29:27.308 SGL Keyed: Supported 00:29:27.308 SGL Bit Bucket Descriptor: Not Supported 
00:29:27.308 SGL Metadata Pointer: Not Supported 00:29:27.308 Oversized SGL: Not Supported 00:29:27.308 SGL Metadata Address: Not Supported 00:29:27.308 SGL Offset: Supported 00:29:27.308 Transport SGL Data Block: Not Supported 00:29:27.308 Replay Protected Memory Block: Not Supported 00:29:27.308 00:29:27.308 Firmware Slot Information 00:29:27.308 ========================= 00:29:27.308 Active slot: 1 00:29:27.308 Slot 1 Firmware Revision: 24.09 00:29:27.308 00:29:27.308 00:29:27.308 Commands Supported and Effects 00:29:27.308 ============================== 00:29:27.308 Admin Commands 00:29:27.308 -------------- 00:29:27.308 Get Log Page (02h): Supported 00:29:27.308 Identify (06h): Supported 00:29:27.308 Abort (08h): Supported 00:29:27.308 Set Features (09h): Supported 00:29:27.308 Get Features (0Ah): Supported 00:29:27.308 Asynchronous Event Request (0Ch): Supported 00:29:27.308 Keep Alive (18h): Supported 00:29:27.308 I/O Commands 00:29:27.308 ------------ 00:29:27.308 Flush (00h): Supported LBA-Change 00:29:27.308 Write (01h): Supported LBA-Change 00:29:27.308 Read (02h): Supported 00:29:27.308 Compare (05h): Supported 00:29:27.308 Write Zeroes (08h): Supported LBA-Change 00:29:27.308 Dataset Management (09h): Supported LBA-Change 00:29:27.308 Copy (19h): Supported LBA-Change 00:29:27.308 00:29:27.308 Error Log 00:29:27.308 ========= 00:29:27.308 00:29:27.308 Arbitration 00:29:27.308 =========== 00:29:27.308 Arbitration Burst: 1 00:29:27.308 00:29:27.308 Power Management 00:29:27.308 ================ 00:29:27.308 Number of Power States: 1 00:29:27.308 Current Power State: Power State #0 00:29:27.308 Power State #0: 00:29:27.308 Max Power: 0.00 W 00:29:27.308 Non-Operational State: Operational 00:29:27.308 Entry Latency: Not Reported 00:29:27.308 Exit Latency: Not Reported 00:29:27.308 Relative Read Throughput: 0 00:29:27.308 Relative Read Latency: 0 00:29:27.308 Relative Write Throughput: 0 00:29:27.308 Relative Write Latency: 0 00:29:27.308 Idle Power: Not Reported 00:29:27.308 Active Power: Not Reported 00:29:27.308 Non-Operational Permissive Mode: Not Supported 00:29:27.308 00:29:27.308 Health Information 00:29:27.308 ================== 00:29:27.308 Critical Warnings: 00:29:27.308 Available Spare Space: OK 00:29:27.308 Temperature: OK 00:29:27.309 Device Reliability: OK 00:29:27.309 Read Only: No 00:29:27.309 Volatile Memory Backup: OK 00:29:27.309 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:27.309 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:29:27.309 Available Spare: 0% 00:29:27.309 Available Spare Threshold: 0% 00:29:27.309 Life Percentage Used:[2024-07-26 16:34:46.826826] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.309 [2024-07-26 16:34:46.826846] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000015700) 00:29:27.309 [2024-07-26 16:34:46.826866] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.309 [2024-07-26 16:34:46.826899] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:29:27.309 [2024-07-26 16:34:46.831078] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.309 [2024-07-26 16:34:46.831104] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.309 [2024-07-26 16:34:46.831116] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.309 [2024-07-26 16:34:46.831134] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000015700 00:29:27.309 [2024-07-26 16:34:46.831240] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:29:27.309 [2024-07-26 16:34:46.831272] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:27.309 [2024-07-26 16:34:46.831295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.309 [2024-07-26 16:34:46.831311] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000015700 00:29:27.309 [2024-07-26 16:34:46.831325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.309 [2024-07-26 16:34:46.831353] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000015700 00:29:27.309 [2024-07-26 16:34:46.831368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.309 [2024-07-26 16:34:46.831380] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:27.309 [2024-07-26 16:34:46.831394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.309 [2024-07-26 16:34:46.831415] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.309 [2024-07-26 16:34:46.831429] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.309 [2024-07-26 16:34:46.831440] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:27.309 [2024-07-26 16:34:46.831460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.309 [2024-07-26 16:34:46.831497] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:27.309 [2024-07-26 16:34:46.831695] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.309 [2024-07-26 16:34:46.831722] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.309 [2024-07-26 16:34:46.831735] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.309 [2024-07-26 16:34:46.831747] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:27.309 [2024-07-26 16:34:46.831769] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.309 [2024-07-26 16:34:46.831783] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.309 [2024-07-26 16:34:46.831795] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:27.309 [2024-07-26 16:34:46.831829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.309 [2024-07-26 16:34:46.831870] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:27.309 [2024-07-26 16:34:46.832100] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.309 [2024-07-26 16:34:46.832122] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:29:27.309 [2024-07-26 16:34:46.832134] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.309 [2024-07-26 16:34:46.832145] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:27.309 [2024-07-26 16:34:46.832161] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:29:27.309 [2024-07-26 16:34:46.832184] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:29:27.309 [2024-07-26 16:34:46.832210] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.309 [2024-07-26 16:34:46.832226] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.309 [2024-07-26 16:34:46.832238] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:27.309 [2024-07-26 16:34:46.832257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.309 [2024-07-26 16:34:46.832294] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:27.309 [2024-07-26 16:34:46.832454] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.309 [2024-07-26 16:34:46.832476] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.309 [2024-07-26 16:34:46.832487] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.309 [2024-07-26 16:34:46.832498] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:27.309 [2024-07-26 16:34:46.832526] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.309 [2024-07-26 16:34:46.832541] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.309 [2024-07-26 16:34:46.832552] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:27.309 [2024-07-26 16:34:46.832575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.309 [2024-07-26 16:34:46.832622] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:27.309 [2024-07-26 16:34:46.832840] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.309 [2024-07-26 16:34:46.832860] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.309 [2024-07-26 16:34:46.832872] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.309 [2024-07-26 16:34:46.832883] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:27.309 [2024-07-26 16:34:46.832909] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.309 [2024-07-26 16:34:46.832925] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.309 [2024-07-26 16:34:46.832936] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:27.309 [2024-07-26 16:34:46.832954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.309 [2024-07-26 16:34:46.832999] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:27.309 [2024-07-26 16:34:46.833211] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.309 [2024-07-26 16:34:46.833233] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.309 [2024-07-26 16:34:46.833244] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.309 [2024-07-26 16:34:46.833256] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:27.309 [2024-07-26 16:34:46.833282] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.309 [2024-07-26 16:34:46.833298] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.309 [2024-07-26 16:34:46.833309] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:27.309 [2024-07-26 16:34:46.833327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.309 [2024-07-26 16:34:46.833373] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:27.309 [2024-07-26 16:34:46.833587] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.309 [2024-07-26 16:34:46.833608] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.309 [2024-07-26 16:34:46.833620] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.309 [2024-07-26 16:34:46.833631] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:27.309 [2024-07-26 16:34:46.833657] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.309 [2024-07-26 16:34:46.833672] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.309 [2024-07-26 16:34:46.833684] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:27.309 [2024-07-26 16:34:46.833702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.309 [2024-07-26 16:34:46.833752] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:27.309 [2024-07-26 16:34:46.833976] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.309 [2024-07-26 16:34:46.833998] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.309 [2024-07-26 16:34:46.834009] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.309 [2024-07-26 16:34:46.834021] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:27.309 [2024-07-26 16:34:46.834047] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.309 [2024-07-26 16:34:46.834072] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.309 [2024-07-26 16:34:46.834085] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:27.309 [2024-07-26 16:34:46.834103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.309 [2024-07-26 16:34:46.834134] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:27.309 [2024-07-26 16:34:46.834285] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.309 [2024-07-26 16:34:46.834306] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:29:27.309 [2024-07-26 16:34:46.834318] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.309 [2024-07-26 16:34:46.834329] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:27.309 [2024-07-26 16:34:46.834356] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.309 [2024-07-26 16:34:46.834371] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.309 [2024-07-26 16:34:46.834382] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:27.310 [2024-07-26 16:34:46.834400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.310 [2024-07-26 16:34:46.834445] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:27.310 [2024-07-26 16:34:46.834668] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.310 [2024-07-26 16:34:46.834688] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.310 [2024-07-26 16:34:46.834699] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.310 [2024-07-26 16:34:46.834710] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:27.310 [2024-07-26 16:34:46.834736] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.310 [2024-07-26 16:34:46.834751] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.310 [2024-07-26 16:34:46.834762] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:27.310 [2024-07-26 16:34:46.834780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.310 [2024-07-26 16:34:46.834825] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:27.310 [2024-07-26 16:34:46.835046] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.310 [2024-07-26 16:34:46.835075] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.310 [2024-07-26 16:34:46.835088] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.310 [2024-07-26 16:34:46.835099] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:27.310 [2024-07-26 16:34:46.835125] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.310 [2024-07-26 16:34:46.835141] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.310 [2024-07-26 16:34:46.835151] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:27.310 [2024-07-26 16:34:46.835169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.310 [2024-07-26 16:34:46.835204] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:27.310 [2024-07-26 16:34:46.835362] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.310 [2024-07-26 16:34:46.835383] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.310 [2024-07-26 16:34:46.835394] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.310 
[2024-07-26 16:34:46.835405] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:27.310 [2024-07-26 16:34:46.835438] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.310 [2024-07-26 16:34:46.835454] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.310 [2024-07-26 16:34:46.835465] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:27.310 [2024-07-26 16:34:46.835490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.310 [2024-07-26 16:34:46.835536] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:27.310 [2024-07-26 16:34:46.835754] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.310 [2024-07-26 16:34:46.835776] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.310 [2024-07-26 16:34:46.835787] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.310 [2024-07-26 16:34:46.835798] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:27.310 [2024-07-26 16:34:46.835825] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.310 [2024-07-26 16:34:46.835841] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.310 [2024-07-26 16:34:46.835852] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:27.310 [2024-07-26 16:34:46.835870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.310 [2024-07-26 16:34:46.835900] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:27.310 [2024-07-26 16:34:46.836045] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.310 [2024-07-26 16:34:46.836075] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.310 [2024-07-26 16:34:46.836088] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.310 [2024-07-26 16:34:46.836099] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:27.310 [2024-07-26 16:34:46.836140] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.310 [2024-07-26 16:34:46.836155] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.310 [2024-07-26 16:34:46.836167] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:27.310 [2024-07-26 16:34:46.836185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.310 [2024-07-26 16:34:46.836216] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:27.310 [2024-07-26 16:34:46.836366] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.310 [2024-07-26 16:34:46.836386] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.310 [2024-07-26 16:34:46.836397] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.310 [2024-07-26 16:34:46.836409] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:27.310 
[2024-07-26 16:34:46.836436] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.310 [2024-07-26 16:34:46.836451] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.310 [2024-07-26 16:34:46.836462] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:27.310 [2024-07-26 16:34:46.836479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.310 [2024-07-26 16:34:46.836530] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:27.310 [2024-07-26 16:34:46.836764] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.310 [2024-07-26 16:34:46.836786] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.310 [2024-07-26 16:34:46.836797] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.310 [2024-07-26 16:34:46.836808] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:27.310 [2024-07-26 16:34:46.836834] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.310 [2024-07-26 16:34:46.836850] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.310 [2024-07-26 16:34:46.836860] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:27.310 [2024-07-26 16:34:46.836878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.310 [2024-07-26 16:34:46.836923] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:27.310 [2024-07-26 16:34:46.837137] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.310 [2024-07-26 16:34:46.837158] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.310 [2024-07-26 16:34:46.837170] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.310 [2024-07-26 16:34:46.837180] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:27.310 [2024-07-26 16:34:46.837207] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.310 [2024-07-26 16:34:46.837222] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.310 [2024-07-26 16:34:46.837233] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:27.310 [2024-07-26 16:34:46.837251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.310 [2024-07-26 16:34:46.837282] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:27.310 [2024-07-26 16:34:46.837429] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.310 [2024-07-26 16:34:46.837449] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.310 [2024-07-26 16:34:46.837460] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.310 [2024-07-26 16:34:46.837471] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:27.310 [2024-07-26 16:34:46.837497] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.310 [2024-07-26 16:34:46.837513] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.310 [2024-07-26 16:34:46.837524] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:27.310 [2024-07-26 16:34:46.837541] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.310 [2024-07-26 16:34:46.837571] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:27.310 [2024-07-26 16:34:46.837828] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.310 [2024-07-26 16:34:46.837849] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.310 [2024-07-26 16:34:46.837860] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.310 [2024-07-26 16:34:46.837871] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:27.310 [2024-07-26 16:34:46.837898] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.310 [2024-07-26 16:34:46.837913] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.310 [2024-07-26 16:34:46.837924] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:27.310 [2024-07-26 16:34:46.837941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.310 [2024-07-26 16:34:46.837986] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:27.310 [2024-07-26 16:34:46.842092] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.310 [2024-07-26 16:34:46.842116] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.310 [2024-07-26 16:34:46.842128] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.310 [2024-07-26 16:34:46.842138] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:27.310 [2024-07-26 16:34:46.842181] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:27.310 [2024-07-26 16:34:46.842197] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:27.310 [2024-07-26 16:34:46.842208] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:27.311 [2024-07-26 16:34:46.842232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.311 [2024-07-26 16:34:46.842266] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:27.311 [2024-07-26 16:34:46.842425] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:27.311 [2024-07-26 16:34:46.842447] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:27.311 [2024-07-26 16:34:46.842458] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:27.311 [2024-07-26 16:34:46.842469] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:27.311 [2024-07-26 16:34:46.842492] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 10 milliseconds 00:29:27.311 0% 00:29:27.311 Data Units Read: 0 00:29:27.311 Data Units Written: 0 00:29:27.311 Host Read Commands: 0 00:29:27.311 Host 
Write Commands: 0 00:29:27.311 Controller Busy Time: 0 minutes 00:29:27.311 Power Cycles: 0 00:29:27.311 Power On Hours: 0 hours 00:29:27.311 Unsafe Shutdowns: 0 00:29:27.311 Unrecoverable Media Errors: 0 00:29:27.311 Lifetime Error Log Entries: 0 00:29:27.311 Warning Temperature Time: 0 minutes 00:29:27.311 Critical Temperature Time: 0 minutes 00:29:27.311 00:29:27.311 Number of Queues 00:29:27.311 ================ 00:29:27.311 Number of I/O Submission Queues: 127 00:29:27.311 Number of I/O Completion Queues: 127 00:29:27.311 00:29:27.311 Active Namespaces 00:29:27.311 ================= 00:29:27.311 Namespace ID:1 00:29:27.311 Error Recovery Timeout: Unlimited 00:29:27.311 Command Set Identifier: NVM (00h) 00:29:27.311 Deallocate: Supported 00:29:27.311 Deallocated/Unwritten Error: Not Supported 00:29:27.311 Deallocated Read Value: Unknown 00:29:27.311 Deallocate in Write Zeroes: Not Supported 00:29:27.311 Deallocated Guard Field: 0xFFFF 00:29:27.311 Flush: Supported 00:29:27.311 Reservation: Supported 00:29:27.311 Namespace Sharing Capabilities: Multiple Controllers 00:29:27.311 Size (in LBAs): 131072 (0GiB) 00:29:27.311 Capacity (in LBAs): 131072 (0GiB) 00:29:27.311 Utilization (in LBAs): 131072 (0GiB) 00:29:27.311 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:27.311 EUI64: ABCDEF0123456789 00:29:27.311 UUID: 24b3adab-b617-49fb-aa3a-e4b433462fe8 00:29:27.311 Thin Provisioning: Not Supported 00:29:27.311 Per-NS Atomic Units: Yes 00:29:27.311 Atomic Boundary Size (Normal): 0 00:29:27.311 Atomic Boundary Size (PFail): 0 00:29:27.311 Atomic Boundary Offset: 0 00:29:27.311 Maximum Single Source Range Length: 65535 00:29:27.311 Maximum Copy Length: 65535 00:29:27.311 Maximum Source Range Count: 1 00:29:27.311 NGUID/EUI64 Never Reused: No 00:29:27.311 Namespace Write Protected: No 00:29:27.311 Number of LBA Formats: 1 00:29:27.311 Current LBA Format: LBA Format #00 00:29:27.311 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:27.311 00:29:27.311 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:29:27.311 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:27.311 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.311 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:27.311 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.311 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:27.311 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:29:27.311 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:27.311 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:29:27.311 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:27.311 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:29:27.311 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:27.311 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:27.311 rmmod nvme_tcp 00:29:27.311 rmmod nvme_fabrics 00:29:27.311 rmmod nvme_keyring 00:29:27.311 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:27.311 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@124 -- # set -e 00:29:27.311 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:29:27.311 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 754250 ']' 00:29:27.311 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 754250 00:29:27.311 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 754250 ']' 00:29:27.311 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 754250 00:29:27.311 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:29:27.311 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:27.311 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 754250 00:29:27.311 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:27.311 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:27.311 16:34:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 754250' 00:29:27.311 killing process with pid 754250 00:29:27.311 16:34:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 754250 00:29:27.311 16:34:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 754250 00:29:28.689 16:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:28.689 16:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:28.689 16:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:28.689 16:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:28.689 16:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:28.689 16:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.689 16:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:28.689 16:34:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:31.284 00:29:31.284 real 0m7.649s 00:29:31.284 user 0m10.642s 00:29:31.284 sys 0m2.212s 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:31.284 ************************************ 00:29:31.284 END TEST nvmf_identify 00:29:31.284 ************************************ 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.284 ************************************ 00:29:31.284 START TEST nvmf_perf 00:29:31.284 ************************************ 00:29:31.284 16:34:50 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:31.284 * Looking for test storage... 00:29:31.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:31.284 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:31.285 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:31.285 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:31.285 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:31.285 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:31.285 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:31.285 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:29:31.285 16:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:33.187 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:33.187 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:29:33.187 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:33.187 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:33.187 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:33.187 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:33.187 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:33.187 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:29:33.187 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:33.187 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:29:33.187 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:29:33.187 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:29:33.187 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:29:33.187 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:29:33.187 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:29:33.187 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:33.187 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:33.187 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:33.187 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:33.187 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:33.187 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:33.187 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:33.187 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:33.187 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:33.187 
16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:33.187 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:33.187 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:33.187 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:33.187 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:33.187 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:33.187 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:33.188 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:33.188 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:29:33.188 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:33.188 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:33.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:33.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:29:33.188 00:29:33.188 --- 10.0.0.2 ping statistics --- 00:29:33.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.188 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:33.188 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:33.188 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:29:33.188 00:29:33.188 --- 10.0.0.1 ping statistics --- 00:29:33.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.188 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=756597 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 756597 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 756597 ']' 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:33.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
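(Editor's note — a condensed sketch of the interface plumbing that nvmftestinit performed in the trace above; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.x addresses are simply the values this particular run detected and assigned, not fixed constants.)
  ip netns add cvl_0_0_ns_spdk                                          # target-side network namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # move the target NIC into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator-side address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target-side address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # allow NVMe/TCP traffic to port 4420
  ping -c 1 10.0.0.2                                                    # initiator -> target reachability check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target -> initiator reachability check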
00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:33.188 16:34:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:33.188 [2024-07-26 16:34:52.786451] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:29:33.188 [2024-07-26 16:34:52.786594] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:33.188 EAL: No free 2048 kB hugepages reported on node 1 00:29:33.188 [2024-07-26 16:34:52.930036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:33.447 [2024-07-26 16:34:53.192339] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:33.447 [2024-07-26 16:34:53.192431] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:33.447 [2024-07-26 16:34:53.192460] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:33.447 [2024-07-26 16:34:53.192483] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:33.447 [2024-07-26 16:34:53.192505] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:33.447 [2024-07-26 16:34:53.192635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:33.447 [2024-07-26 16:34:53.192708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:33.447 [2024-07-26 16:34:53.192788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.447 [2024-07-26 16:34:53.192798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:34.013 16:34:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:34.013 16:34:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:29:34.013 16:34:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:34.013 16:34:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:34.013 16:34:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:34.271 16:34:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:34.271 16:34:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:34.271 16:34:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:37.556 16:34:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:29:37.556 16:34:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:29:37.556 16:34:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:29:37.556 16:34:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:37.814 16:34:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:29:37.814 16:34:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 
-- # '[' -n 0000:88:00.0 ']' 00:29:37.814 16:34:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:29:37.814 16:34:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:29:37.814 16:34:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:38.073 [2024-07-26 16:34:57.767859] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:38.073 16:34:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:38.332 16:34:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:38.332 16:34:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:38.591 16:34:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:38.591 16:34:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:38.849 16:34:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:39.107 [2024-07-26 16:34:58.784655] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:39.107 16:34:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:39.366 16:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:29:39.366 16:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:29:39.366 16:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:29:39.366 16:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:29:40.743 Initializing NVMe Controllers 00:29:40.743 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:29:40.743 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:29:40.743 Initialization complete. Launching workers. 
00:29:40.743 ======================================================== 00:29:40.743 Latency(us) 00:29:40.743 Device Information : IOPS MiB/s Average min max 00:29:40.743 PCIE (0000:88:00.0) NSID 1 from core 0: 74867.50 292.45 427.34 49.56 6291.33 00:29:40.743 ======================================================== 00:29:40.743 Total : 74867.50 292.45 427.34 49.56 6291.33 00:29:40.743 00:29:41.002 16:35:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:41.002 EAL: No free 2048 kB hugepages reported on node 1 00:29:42.383 Initializing NVMe Controllers 00:29:42.383 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:42.383 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:42.383 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:42.383 Initialization complete. Launching workers. 00:29:42.383 ======================================================== 00:29:42.383 Latency(us) 00:29:42.383 Device Information : IOPS MiB/s Average min max 00:29:42.383 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 87.69 0.34 11402.84 253.51 45270.22 00:29:42.383 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 70.75 0.28 14244.62 4976.83 50890.00 00:29:42.383 ======================================================== 00:29:42.383 Total : 158.45 0.62 12671.81 253.51 50890.00 00:29:42.383 00:29:42.383 16:35:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:42.641 EAL: No free 2048 kB hugepages reported on node 1 00:29:44.017 Initializing NVMe Controllers 00:29:44.017 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:44.017 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:44.017 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:44.017 Initialization complete. Launching workers. 
00:29:44.017 ======================================================== 00:29:44.017 Latency(us) 00:29:44.017 Device Information : IOPS MiB/s Average min max 00:29:44.017 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5435.49 21.23 5887.44 1076.25 12584.14 00:29:44.017 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3788.31 14.80 8473.29 6400.30 19407.21 00:29:44.017 ======================================================== 00:29:44.017 Total : 9223.80 36.03 6949.47 1076.25 19407.21 00:29:44.017 00:29:44.017 16:35:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:29:44.017 16:35:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:29:44.017 16:35:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:44.017 EAL: No free 2048 kB hugepages reported on node 1 00:29:46.551 Initializing NVMe Controllers 00:29:46.551 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:46.551 Controller IO queue size 128, less than required. 00:29:46.551 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:46.551 Controller IO queue size 128, less than required. 00:29:46.551 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:46.551 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:46.551 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:46.551 Initialization complete. Launching workers. 00:29:46.551 ======================================================== 00:29:46.551 Latency(us) 00:29:46.551 Device Information : IOPS MiB/s Average min max 00:29:46.551 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 799.89 199.97 170172.41 120096.23 341420.65 00:29:46.551 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 522.43 130.61 259007.19 122691.78 470925.44 00:29:46.551 ======================================================== 00:29:46.551 Total : 1322.32 330.58 205269.71 120096.23 470925.44 00:29:46.551 00:29:46.809 16:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:29:46.809 EAL: No free 2048 kB hugepages reported on node 1 00:29:47.067 No valid NVMe controllers or AIO or URING devices found 00:29:47.067 Initializing NVMe Controllers 00:29:47.067 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:47.067 Controller IO queue size 128, less than required. 00:29:47.067 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:47.067 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:29:47.067 Controller IO queue size 128, less than required. 00:29:47.068 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:47.068 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:29:47.068 WARNING: Some requested NVMe devices were skipped 00:29:47.068 16:35:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:29:47.068 EAL: No free 2048 kB hugepages reported on node 1 00:29:50.411 Initializing NVMe Controllers 00:29:50.411 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:50.411 Controller IO queue size 128, less than required. 00:29:50.411 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:50.411 Controller IO queue size 128, less than required. 00:29:50.411 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:50.411 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:50.411 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:50.411 Initialization complete. Launching workers. 00:29:50.411 00:29:50.411 ==================== 00:29:50.411 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:29:50.411 TCP transport: 00:29:50.411 polls: 11415 00:29:50.411 idle_polls: 4440 00:29:50.411 sock_completions: 6975 00:29:50.411 nvme_completions: 3827 00:29:50.411 submitted_requests: 5762 00:29:50.411 queued_requests: 1 00:29:50.411 00:29:50.411 ==================== 00:29:50.411 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:29:50.411 TCP transport: 00:29:50.411 polls: 14136 00:29:50.411 idle_polls: 6172 00:29:50.411 sock_completions: 7964 00:29:50.411 nvme_completions: 3845 00:29:50.411 submitted_requests: 5766 00:29:50.411 queued_requests: 1 00:29:50.411 ======================================================== 00:29:50.411 Latency(us) 00:29:50.411 Device Information : IOPS MiB/s Average min max 00:29:50.411 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 956.44 239.11 143459.20 84885.18 385795.04 00:29:50.411 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 960.94 240.23 139847.68 63862.56 458170.67 00:29:50.411 ======================================================== 00:29:50.411 Total : 1917.38 479.35 141649.20 63862.56 458170.67 00:29:50.411 00:29:50.411 16:35:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:29:50.411 16:35:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:50.411 16:35:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:29:50.411 16:35:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:29:50.411 16:35:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:29:53.698 16:35:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=e3050fa8-a57c-459e-a22f-4d6eb08c4471 00:29:53.698 16:35:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb e3050fa8-a57c-459e-a22f-4d6eb08c4471 00:29:53.698 16:35:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=e3050fa8-a57c-459e-a22f-4d6eb08c4471 00:29:53.698 16:35:13 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:53.698 16:35:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:29:53.698 16:35:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:29:53.698 16:35:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:53.956 16:35:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:53.956 { 00:29:53.956 "uuid": "e3050fa8-a57c-459e-a22f-4d6eb08c4471", 00:29:53.956 "name": "lvs_0", 00:29:53.956 "base_bdev": "Nvme0n1", 00:29:53.956 "total_data_clusters": 238234, 00:29:53.956 "free_clusters": 238234, 00:29:53.956 "block_size": 512, 00:29:53.956 "cluster_size": 4194304 00:29:53.956 } 00:29:53.956 ]' 00:29:53.956 16:35:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="e3050fa8-a57c-459e-a22f-4d6eb08c4471") .free_clusters' 00:29:53.956 16:35:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:29:53.956 16:35:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="e3050fa8-a57c-459e-a22f-4d6eb08c4471") .cluster_size' 00:29:53.956 16:35:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:29:53.956 16:35:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:29:53.956 16:35:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:29:53.956 952936 00:29:53.956 16:35:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:29:53.956 16:35:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:29:53.956 16:35:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e3050fa8-a57c-459e-a22f-4d6eb08c4471 lbd_0 20480 00:29:54.523 16:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=18536856-ab61-411e-8b19-7805c5efba4a 00:29:54.523 16:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 18536856-ab61-411e-8b19-7805c5efba4a lvs_n_0 00:29:55.089 16:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=ab607da8-987d-448e-a6bf-be2448f38e86 00:29:55.089 16:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb ab607da8-987d-448e-a6bf-be2448f38e86 00:29:55.089 16:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=ab607da8-987d-448e-a6bf-be2448f38e86 00:29:55.089 16:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:55.089 16:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:29:55.089 16:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:29:55.089 16:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:55.347 16:35:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:55.347 { 00:29:55.347 "uuid": "e3050fa8-a57c-459e-a22f-4d6eb08c4471", 00:29:55.347 "name": "lvs_0", 00:29:55.347 "base_bdev": "Nvme0n1", 00:29:55.347 "total_data_clusters": 238234, 
00:29:55.347 "free_clusters": 233114, 00:29:55.347 "block_size": 512, 00:29:55.347 "cluster_size": 4194304 00:29:55.347 }, 00:29:55.347 { 00:29:55.347 "uuid": "ab607da8-987d-448e-a6bf-be2448f38e86", 00:29:55.347 "name": "lvs_n_0", 00:29:55.347 "base_bdev": "18536856-ab61-411e-8b19-7805c5efba4a", 00:29:55.347 "total_data_clusters": 5114, 00:29:55.347 "free_clusters": 5114, 00:29:55.347 "block_size": 512, 00:29:55.347 "cluster_size": 4194304 00:29:55.347 } 00:29:55.347 ]' 00:29:55.347 16:35:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="ab607da8-987d-448e-a6bf-be2448f38e86") .free_clusters' 00:29:55.605 16:35:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:29:55.605 16:35:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="ab607da8-987d-448e-a6bf-be2448f38e86") .cluster_size' 00:29:55.605 16:35:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:29:55.605 16:35:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:29:55.605 16:35:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:29:55.605 20456 00:29:55.605 16:35:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:29:55.605 16:35:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ab607da8-987d-448e-a6bf-be2448f38e86 lbd_nest_0 20456 00:29:55.864 16:35:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=2fbda93c-bfb5-4184-80ac-e3e44673a8d2 00:29:55.864 16:35:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:56.122 16:35:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:29:56.122 16:35:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 2fbda93c-bfb5-4184-80ac-e3e44673a8d2 00:29:56.380 16:35:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:56.638 16:35:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:29:56.638 16:35:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:29:56.638 16:35:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:56.638 16:35:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:56.638 16:35:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:56.638 EAL: No free 2048 kB hugepages reported on node 1 00:30:08.845 Initializing NVMe Controllers 00:30:08.845 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:08.845 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:08.845 Initialization complete. Launching workers. 
00:30:08.845 ======================================================== 00:30:08.845 Latency(us) 00:30:08.845 Device Information : IOPS MiB/s Average min max 00:30:08.845 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 47.69 0.02 20966.60 292.33 49250.14 00:30:08.845 ======================================================== 00:30:08.845 Total : 47.69 0.02 20966.60 292.33 49250.14 00:30:08.845 00:30:08.845 16:35:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:08.845 16:35:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:08.845 EAL: No free 2048 kB hugepages reported on node 1 00:30:18.825 Initializing NVMe Controllers 00:30:18.825 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:18.825 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:18.825 Initialization complete. Launching workers. 00:30:18.825 ======================================================== 00:30:18.825 Latency(us) 00:30:18.825 Device Information : IOPS MiB/s Average min max 00:30:18.825 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 80.19 10.02 12479.71 5037.14 47915.54 00:30:18.825 ======================================================== 00:30:18.825 Total : 80.19 10.02 12479.71 5037.14 47915.54 00:30:18.825 00:30:18.825 16:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:18.825 16:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:18.825 16:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:18.825 EAL: No free 2048 kB hugepages reported on node 1 00:30:28.879 Initializing NVMe Controllers 00:30:28.879 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:28.879 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:28.879 Initialization complete. Launching workers. 00:30:28.879 ======================================================== 00:30:28.879 Latency(us) 00:30:28.879 Device Information : IOPS MiB/s Average min max 00:30:28.879 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4651.20 2.27 6880.75 630.50 13508.27 00:30:28.879 ======================================================== 00:30:28.879 Total : 4651.20 2.27 6880.75 630.50 13508.27 00:30:28.879 00:30:28.879 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:28.879 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:28.879 EAL: No free 2048 kB hugepages reported on node 1 00:30:38.862 Initializing NVMe Controllers 00:30:38.862 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:38.862 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:38.862 Initialization complete. Launching workers. 
00:30:38.862 ======================================================== 00:30:38.862 Latency(us) 00:30:38.862 Device Information : IOPS MiB/s Average min max 00:30:38.862 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1970.71 246.34 16246.58 794.10 32559.95 00:30:38.862 ======================================================== 00:30:38.862 Total : 1970.71 246.34 16246.58 794.10 32559.95 00:30:38.862 00:30:38.862 16:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:38.862 16:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:38.862 16:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:38.862 EAL: No free 2048 kB hugepages reported on node 1 00:30:48.839 Initializing NVMe Controllers 00:30:48.839 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:48.839 Controller IO queue size 128, less than required. 00:30:48.839 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:48.839 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:48.839 Initialization complete. Launching workers. 00:30:48.839 ======================================================== 00:30:48.839 Latency(us) 00:30:48.839 Device Information : IOPS MiB/s Average min max 00:30:48.839 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8559.02 4.18 14961.89 1782.05 52130.27 00:30:48.839 ======================================================== 00:30:48.839 Total : 8559.02 4.18 14961.89 1782.05 52130.27 00:30:48.839 00:30:48.839 16:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:48.839 16:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:48.839 EAL: No free 2048 kB hugepages reported on node 1 00:31:01.044 Initializing NVMe Controllers 00:31:01.044 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:01.044 Controller IO queue size 128, less than required. 00:31:01.044 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:01.044 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:01.044 Initialization complete. Launching workers. 
00:31:01.044 ======================================================== 00:31:01.044 Latency(us) 00:31:01.044 Device Information : IOPS MiB/s Average min max 00:31:01.044 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1178.50 147.31 109011.93 16143.94 239892.02 00:31:01.044 ======================================================== 00:31:01.044 Total : 1178.50 147.31 109011.93 16143.94 239892.02 00:31:01.044 00:31:01.044 16:36:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:01.044 16:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2fbda93c-bfb5-4184-80ac-e3e44673a8d2 00:31:01.044 16:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:01.044 16:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 18536856-ab61-411e-8b19-7805c5efba4a 00:31:01.044 16:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:01.303 16:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:01.303 16:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:31:01.303 16:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:01.303 16:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:31:01.303 16:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:01.303 16:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:31:01.303 16:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:01.303 16:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:01.303 rmmod nvme_tcp 00:31:01.303 rmmod nvme_fabrics 00:31:01.303 rmmod nvme_keyring 00:31:01.303 16:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:01.303 16:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:31:01.303 16:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:31:01.303 16:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 756597 ']' 00:31:01.303 16:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 756597 00:31:01.303 16:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 756597 ']' 00:31:01.303 16:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 756597 00:31:01.303 16:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:31:01.303 16:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:01.303 16:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 756597 00:31:01.303 16:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:01.303 16:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:01.303 16:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with 
pid 756597' 00:31:01.303 killing process with pid 756597 00:31:01.303 16:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 756597 00:31:01.303 16:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 756597 00:31:03.876 16:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:03.876 16:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:03.876 16:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:03.876 16:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:03.876 16:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:03.876 16:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:03.876 16:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:03.876 16:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:05.783 16:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:05.783 00:31:05.783 real 1m34.992s 00:31:05.783 user 5m49.828s 00:31:05.783 sys 0m15.621s 00:31:05.783 16:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:05.783 16:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:05.783 ************************************ 00:31:05.783 END TEST nvmf_perf 00:31:05.783 ************************************ 00:31:05.783 16:36:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:05.783 16:36:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:05.783 16:36:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:05.783 16:36:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.041 ************************************ 00:31:06.041 START TEST nvmf_fio_host 00:31:06.041 ************************************ 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:06.041 * Looking for test storage... 
00:31:06.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:06.041 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:06.042 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:06.042 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:06.042 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:06.042 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:06.042 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:06.042 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:31:06.042 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:06.042 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:06.042 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:06.042 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:06.042 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:06.042 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:06.042 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:06.042 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:06.042 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:06.042 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:06.042 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:31:06.042 16:36:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:07.942 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:07.942 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:07.942 
16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:07.942 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:07.942 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:07.942 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:07.943 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
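The nvmf_tcp_init plumbing traced here (and finishing just below) moves the target-side port into its own network namespace so initiator and target use separate physical interfaces. Condensed, and assuming the cvl_0_0/cvl_0_1 interface names from this rig, the sequence amounts to:

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port in the root namespace
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt used by the fio test is then started inside that namespace ('ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt ...', as traced below), which is why its 10.0.0.2 listener is reachable from the root namespace.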
00:31:07.943 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:07.943 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:07.943 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:07.943 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:07.943 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:07.943 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:07.943 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:07.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:07.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:31:07.943 00:31:07.943 --- 10.0.0.2 ping statistics --- 00:31:07.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:07.943 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:31:07.943 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:07.943 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:07.943 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:31:07.943 00:31:07.943 --- 10.0.0.1 ping statistics --- 00:31:07.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:07.943 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:31:07.943 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:07.943 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:31:07.943 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:07.943 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:07.943 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:07.943 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:07.943 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:07.943 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:07.943 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:07.943 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:31:07.943 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:31:07.943 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:07.943 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.943 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=769691 00:31:07.943 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:07.943 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:07.943 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # 
waitforlisten 769691 00:31:07.943 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 769691 ']' 00:31:07.943 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:07.943 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:07.943 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:07.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:07.943 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:07.943 16:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.202 [2024-07-26 16:36:27.753923] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:31:08.202 [2024-07-26 16:36:27.754084] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:08.202 EAL: No free 2048 kB hugepages reported on node 1 00:31:08.202 [2024-07-26 16:36:27.892082] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:08.460 [2024-07-26 16:36:28.152758] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:08.460 [2024-07-26 16:36:28.152841] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:08.460 [2024-07-26 16:36:28.152871] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:08.460 [2024-07-26 16:36:28.152892] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:08.460 [2024-07-26 16:36:28.152915] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:08.460 [2024-07-26 16:36:28.153045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:08.460 [2024-07-26 16:36:28.153127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:08.460 [2024-07-26 16:36:28.153162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:08.460 [2024-07-26 16:36:28.153175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:09.026 16:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:09.026 16:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:31:09.026 16:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:09.284 [2024-07-26 16:36:28.885779] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:09.284 16:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:31:09.284 16:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:09.284 16:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.284 16:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:31:09.542 Malloc1 00:31:09.542 16:36:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:09.800 16:36:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:10.058 16:36:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:10.316 [2024-07-26 16:36:29.975527] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:10.316 16:36:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:10.573 16:36:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:10.573 16:36:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:10.573 16:36:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:10.573 16:36:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:10.573 16:36:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:10.573 16:36:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:10.573 16:36:30 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:10.573 16:36:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:10.573 16:36:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:10.573 16:36:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:10.573 16:36:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:10.573 16:36:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:10.573 16:36:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:10.573 16:36:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:10.573 16:36:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:10.573 16:36:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:31:10.573 16:36:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:10.573 16:36:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:10.831 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:10.831 fio-3.35 00:31:10.831 Starting 1 thread 00:31:10.831 EAL: No free 2048 kB hugepages reported on node 1 00:31:13.360 00:31:13.360 test: (groupid=0, jobs=1): err= 0: pid=770172: Fri Jul 26 16:36:32 2024 00:31:13.360 read: IOPS=6358, BW=24.8MiB/s (26.0MB/s)(49.9MiB/2009msec) 00:31:13.360 slat (usec): min=2, max=184, avg= 3.75, stdev= 2.50 00:31:13.360 clat (usec): min=3773, max=19564, avg=11071.87, stdev=895.64 00:31:13.360 lat (usec): min=3802, max=19567, avg=11075.62, stdev=895.52 00:31:13.360 clat percentiles (usec): 00:31:13.360 | 1.00th=[ 8979], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10421], 00:31:13.360 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:31:13.360 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12125], 95.00th=[12387], 00:31:13.360 | 99.00th=[12911], 99.50th=[13304], 99.90th=[17171], 99.95th=[18220], 00:31:13.360 | 99.99th=[19530] 00:31:13.360 bw ( KiB/s): min=24424, max=26024, per=99.90%, avg=25408.00, stdev=688.00, samples=4 00:31:13.360 iops : min= 6106, max= 6506, avg=6352.00, stdev=172.00, samples=4 00:31:13.360 write: IOPS=6357, BW=24.8MiB/s (26.0MB/s)(49.9MiB/2009msec); 0 zone resets 00:31:13.360 slat (usec): min=3, max=164, avg= 3.90, stdev= 1.97 00:31:13.360 clat (usec): min=1858, max=16831, avg=8939.30, stdev=771.14 00:31:13.360 lat (usec): min=1876, max=16835, avg=8943.19, stdev=771.08 00:31:13.360 clat percentiles (usec): 00:31:13.360 | 1.00th=[ 7242], 5.00th=[ 7832], 10.00th=[ 8094], 20.00th=[ 8356], 00:31:13.360 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9110], 00:31:13.360 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[ 9765], 95.00th=[10028], 00:31:13.360 | 99.00th=[10552], 99.50th=[10814], 99.90th=[15401], 
99.95th=[15664], 00:31:13.360 | 99.99th=[16057] 00:31:13.360 bw ( KiB/s): min=25088, max=25576, per=100.00%, avg=25430.00, stdev=230.61, samples=4 00:31:13.360 iops : min= 6272, max= 6394, avg=6357.50, stdev=57.65, samples=4 00:31:13.360 lat (msec) : 2=0.01%, 4=0.07%, 10=51.77%, 20=48.14% 00:31:13.360 cpu : usr=64.79%, sys=31.13%, ctx=69, majf=0, minf=1536 00:31:13.360 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:13.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.360 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:13.360 issued rwts: total=12774,12772,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.360 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:13.360 00:31:13.360 Run status group 0 (all jobs): 00:31:13.360 READ: bw=24.8MiB/s (26.0MB/s), 24.8MiB/s-24.8MiB/s (26.0MB/s-26.0MB/s), io=49.9MiB (52.3MB), run=2009-2009msec 00:31:13.360 WRITE: bw=24.8MiB/s (26.0MB/s), 24.8MiB/s-24.8MiB/s (26.0MB/s-26.0MB/s), io=49.9MiB (52.3MB), run=2009-2009msec 00:31:13.360 ----------------------------------------------------- 00:31:13.360 Suppressions used: 00:31:13.360 count bytes template 00:31:13.360 1 57 /usr/src/fio/parse.c 00:31:13.360 1 8 libtcmalloc_minimal.so 00:31:13.360 ----------------------------------------------------- 00:31:13.360 00:31:13.360 16:36:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:13.360 16:36:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:13.360 16:36:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:13.360 16:36:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:13.360 16:36:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:13.360 16:36:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:13.360 16:36:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:13.360 16:36:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:13.360 16:36:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:13.360 16:36:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:13.360 16:36:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:13.360 16:36:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:13.360 16:36:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:13.360 16:36:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:13.360 16:36:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # 
break 00:31:13.360 16:36:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:13.360 16:36:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:13.617 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:13.617 fio-3.35 00:31:13.617 Starting 1 thread 00:31:13.875 EAL: No free 2048 kB hugepages reported on node 1 00:31:16.401 00:31:16.401 test: (groupid=0, jobs=1): err= 0: pid=770499: Fri Jul 26 16:36:35 2024 00:31:16.401 read: IOPS=6065, BW=94.8MiB/s (99.4MB/s)(195MiB/2054msec) 00:31:16.401 slat (usec): min=3, max=127, avg= 4.97, stdev= 2.19 00:31:16.401 clat (usec): min=3346, max=59859, avg=12520.80, stdev=4477.75 00:31:16.401 lat (usec): min=3350, max=59864, avg=12525.77, stdev=4477.75 00:31:16.401 clat percentiles (usec): 00:31:16.401 | 1.00th=[ 6063], 5.00th=[ 7439], 10.00th=[ 8356], 20.00th=[ 9765], 00:31:16.401 | 30.00th=[10683], 40.00th=[11469], 50.00th=[12125], 60.00th=[12649], 00:31:16.401 | 70.00th=[13435], 80.00th=[14877], 90.00th=[16909], 95.00th=[17957], 00:31:16.401 | 99.00th=[20841], 99.50th=[53216], 99.90th=[57934], 99.95th=[58983], 00:31:16.401 | 99.99th=[58983] 00:31:16.401 bw ( KiB/s): min=41216, max=56224, per=50.62%, avg=49128.00, stdev=6335.48, samples=4 00:31:16.401 iops : min= 2576, max= 3514, avg=3070.50, stdev=395.97, samples=4 00:31:16.401 write: IOPS=3468, BW=54.2MiB/s (56.8MB/s)(100MiB/1849msec); 0 zone resets 00:31:16.401 slat (usec): min=32, max=236, avg=36.89, stdev= 7.10 00:31:16.401 clat (usec): min=9070, max=65965, avg=15456.17, stdev=5214.69 00:31:16.401 lat (usec): min=9104, max=65999, avg=15493.06, stdev=5214.65 00:31:16.401 clat percentiles (usec): 00:31:16.401 | 1.00th=[ 9503], 5.00th=[11076], 10.00th=[11863], 20.00th=[12780], 00:31:16.401 | 30.00th=[13435], 40.00th=[14091], 50.00th=[14615], 60.00th=[15401], 00:31:16.401 | 70.00th=[16450], 80.00th=[17433], 90.00th=[19006], 95.00th=[20055], 00:31:16.401 | 99.00th=[22938], 99.50th=[62653], 99.90th=[65274], 99.95th=[65799], 00:31:16.401 | 99.99th=[65799] 00:31:16.401 bw ( KiB/s): min=41184, max=59648, per=91.61%, avg=50848.00, stdev=7728.87, samples=4 00:31:16.401 iops : min= 2574, max= 3728, avg=3178.00, stdev=483.05, samples=4 00:31:16.401 lat (msec) : 4=0.11%, 10=15.35%, 20=81.79%, 50=2.08%, 100=0.67% 00:31:16.401 cpu : usr=72.78%, sys=23.81%, ctx=34, majf=0, minf=2075 00:31:16.401 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:31:16.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:16.401 issued rwts: total=12459,6414,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.401 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:16.401 00:31:16.401 Run status group 0 (all jobs): 00:31:16.401 READ: bw=94.8MiB/s (99.4MB/s), 94.8MiB/s-94.8MiB/s (99.4MB/s-99.4MB/s), io=195MiB (204MB), run=2054-2054msec 00:31:16.401 WRITE: bw=54.2MiB/s (56.8MB/s), 54.2MiB/s-54.2MiB/s (56.8MB/s-56.8MB/s), io=100MiB (105MB), run=1849-1849msec 00:31:16.401 ----------------------------------------------------- 00:31:16.402 Suppressions used: 00:31:16.402 count bytes template 
00:31:16.402 1 57 /usr/src/fio/parse.c 00:31:16.402 101 9696 /usr/src/fio/iolog.c 00:31:16.402 1 8 libtcmalloc_minimal.so 00:31:16.402 ----------------------------------------------------- 00:31:16.402 00:31:16.402 16:36:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:16.659 16:36:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:31:16.659 16:36:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:31:16.659 16:36:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:31:16.659 16:36:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:31:16.659 16:36:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:31:16.659 16:36:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:16.660 16:36:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:16.660 16:36:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:31:16.660 16:36:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:31:16.660 16:36:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:31:16.660 16:36:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:31:19.946 Nvme0n1 00:31:19.946 16:36:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:31:23.293 16:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=ab19f153-1d90-4afd-9bf5-33c3ed56a10d 00:31:23.294 16:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb ab19f153-1d90-4afd-9bf5-33c3ed56a10d 00:31:23.294 16:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=ab19f153-1d90-4afd-9bf5-33c3ed56a10d 00:31:23.294 16:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:23.294 16:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:31:23.294 16:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:31:23.294 16:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:23.294 16:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:23.294 { 00:31:23.294 "uuid": "ab19f153-1d90-4afd-9bf5-33c3ed56a10d", 00:31:23.294 "name": "lvs_0", 00:31:23.294 "base_bdev": "Nvme0n1", 00:31:23.294 "total_data_clusters": 930, 00:31:23.294 "free_clusters": 930, 00:31:23.294 "block_size": 512, 00:31:23.294 "cluster_size": 1073741824 00:31:23.294 } 00:31:23.294 ]' 00:31:23.294 16:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="ab19f153-1d90-4afd-9bf5-33c3ed56a10d") .free_clusters' 00:31:23.294 16:36:42 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:31:23.294 16:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="ab19f153-1d90-4afd-9bf5-33c3ed56a10d") .cluster_size' 00:31:23.294 16:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:31:23.294 16:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:31:23.294 16:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:31:23.294 952320 00:31:23.294 16:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:31:23.552 b95e1970-8c68-48a6-b593-3631b60fde5e 00:31:23.552 16:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:31:23.809 16:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:31:24.066 16:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:24.323 16:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:24.323 16:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:24.323 16:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:24.323 16:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:24.323 16:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:24.323 16:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:24.323 16:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:24.323 16:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:24.323 16:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:24.323 16:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:24.323 16:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:24.323 16:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:24.323 16:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:24.323 16:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ 
-n /usr/lib64/libasan.so.8 ]] 00:31:24.323 16:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:31:24.323 16:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:24.323 16:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:24.583 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:24.583 fio-3.35 00:31:24.583 Starting 1 thread 00:31:24.583 EAL: No free 2048 kB hugepages reported on node 1 00:31:27.116 00:31:27.116 test: (groupid=0, jobs=1): err= 0: pid=771891: Fri Jul 26 16:36:46 2024 00:31:27.116 read: IOPS=4028, BW=15.7MiB/s (16.5MB/s)(31.7MiB/2012msec) 00:31:27.116 slat (usec): min=2, max=194, avg= 3.59, stdev= 3.34 00:31:27.116 clat (usec): min=1625, max=174321, avg=17311.98, stdev=13697.73 00:31:27.116 lat (usec): min=1630, max=174388, avg=17315.57, stdev=13698.36 00:31:27.116 clat percentiles (msec): 00:31:27.116 | 1.00th=[ 13], 5.00th=[ 14], 10.00th=[ 15], 20.00th=[ 16], 00:31:27.116 | 30.00th=[ 16], 40.00th=[ 16], 50.00th=[ 17], 60.00th=[ 17], 00:31:27.116 | 70.00th=[ 17], 80.00th=[ 18], 90.00th=[ 18], 95.00th=[ 19], 00:31:27.116 | 99.00th=[ 23], 99.50th=[ 157], 99.90th=[ 174], 99.95th=[ 176], 00:31:27.116 | 99.99th=[ 176] 00:31:27.116 bw ( KiB/s): min=11072, max=18112, per=99.73%, avg=16072.00, stdev=3351.58, samples=4 00:31:27.116 iops : min= 2768, max= 4528, avg=4018.00, stdev=837.89, samples=4 00:31:27.116 write: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(31.9MiB/2012msec); 0 zone resets 00:31:27.116 slat (usec): min=2, max=172, avg= 3.76, stdev= 2.41 00:31:27.116 clat (usec): min=465, max=171104, avg=14035.48, stdev=12880.26 00:31:27.116 lat (usec): min=470, max=171115, avg=14039.24, stdev=12880.91 00:31:27.116 clat percentiles (msec): 00:31:27.116 | 1.00th=[ 10], 5.00th=[ 12], 10.00th=[ 12], 20.00th=[ 12], 00:31:27.116 | 30.00th=[ 13], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 14], 00:31:27.116 | 70.00th=[ 14], 80.00th=[ 14], 90.00th=[ 15], 95.00th=[ 15], 00:31:27.116 | 99.00th=[ 20], 99.50th=[ 159], 99.90th=[ 171], 99.95th=[ 171], 00:31:27.116 | 99.99th=[ 171] 00:31:27.116 bw ( KiB/s): min=11712, max=17792, per=99.88%, avg=16202.00, stdev=2994.36, samples=4 00:31:27.116 iops : min= 2928, max= 4448, avg=4050.50, stdev=748.59, samples=4 00:31:27.116 lat (usec) : 500=0.01%, 1000=0.02% 00:31:27.116 lat (msec) : 2=0.02%, 4=0.06%, 10=0.81%, 20=98.00%, 50=0.31% 00:31:27.116 lat (msec) : 250=0.79% 00:31:27.116 cpu : usr=65.04%, sys=31.87%, ctx=91, majf=0, minf=1535 00:31:27.116 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:31:27.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.116 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:27.116 issued rwts: total=8106,8159,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:27.116 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:27.116 00:31:27.116 Run status group 0 (all jobs): 00:31:27.116 READ: bw=15.7MiB/s (16.5MB/s), 15.7MiB/s-15.7MiB/s (16.5MB/s-16.5MB/s), io=31.7MiB (33.2MB), run=2012-2012msec 00:31:27.116 WRITE: bw=15.8MiB/s (16.6MB/s), 15.8MiB/s-15.8MiB/s (16.6MB/s-16.6MB/s), io=31.9MiB (33.4MB), 
run=2012-2012msec 00:31:27.374 ----------------------------------------------------- 00:31:27.374 Suppressions used: 00:31:27.374 count bytes template 00:31:27.374 1 58 /usr/src/fio/parse.c 00:31:27.374 1 8 libtcmalloc_minimal.so 00:31:27.374 ----------------------------------------------------- 00:31:27.374 00:31:27.375 16:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:27.632 16:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:31:29.008 16:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=52ef4c90-2fe8-424c-898b-8a2e3ae9b037 00:31:29.008 16:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 52ef4c90-2fe8-424c-898b-8a2e3ae9b037 00:31:29.008 16:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=52ef4c90-2fe8-424c-898b-8a2e3ae9b037 00:31:29.008 16:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:29.008 16:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:31:29.008 16:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:31:29.008 16:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:29.008 16:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:29.008 { 00:31:29.008 "uuid": "ab19f153-1d90-4afd-9bf5-33c3ed56a10d", 00:31:29.008 "name": "lvs_0", 00:31:29.008 "base_bdev": "Nvme0n1", 00:31:29.008 "total_data_clusters": 930, 00:31:29.008 "free_clusters": 0, 00:31:29.008 "block_size": 512, 00:31:29.008 "cluster_size": 1073741824 00:31:29.008 }, 00:31:29.008 { 00:31:29.008 "uuid": "52ef4c90-2fe8-424c-898b-8a2e3ae9b037", 00:31:29.008 "name": "lvs_n_0", 00:31:29.008 "base_bdev": "b95e1970-8c68-48a6-b593-3631b60fde5e", 00:31:29.008 "total_data_clusters": 237847, 00:31:29.008 "free_clusters": 237847, 00:31:29.008 "block_size": 512, 00:31:29.008 "cluster_size": 4194304 00:31:29.008 } 00:31:29.008 ]' 00:31:29.008 16:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="52ef4c90-2fe8-424c-898b-8a2e3ae9b037") .free_clusters' 00:31:29.008 16:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:31:29.008 16:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="52ef4c90-2fe8-424c-898b-8a2e3ae9b037") .cluster_size' 00:31:29.008 16:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:31:29.008 16:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:31:29.008 16:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:31:29.008 951388 00:31:29.008 16:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:31:30.384 9a04cf84-259a-4cfa-9b0b-958583d90c7d 00:31:30.384 16:36:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:31:30.384 16:36:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:31:30.642 16:36:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:31:30.900 16:36:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:30.900 16:36:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:30.900 16:36:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:30.900 16:36:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:30.900 16:36:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:30.900 16:36:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:30.900 16:36:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:30.900 16:36:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:30.900 16:36:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:30.900 16:36:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:30.901 16:36:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:30.901 16:36:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:30.901 16:36:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:30.901 16:36:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:30.901 16:36:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:31:30.901 16:36:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:30.901 16:36:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:31.160 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:31.160 fio-3.35 00:31:31.160 Starting 1 thread 00:31:31.160 EAL: No free 2048 kB hugepages reported on node 1 00:31:33.687 00:31:33.687 test: (groupid=0, jobs=1): err= 0: pid=772746: Fri Jul 26 16:36:53 2024 00:31:33.687 read: 
IOPS=4368, BW=17.1MiB/s (17.9MB/s)(34.3MiB/2012msec) 00:31:33.687 slat (usec): min=2, max=212, avg= 3.56, stdev= 3.30 00:31:33.687 clat (usec): min=6074, max=25588, avg=16160.76, stdev=1451.08 00:31:33.687 lat (usec): min=6093, max=25592, avg=16164.33, stdev=1450.95 00:31:33.687 clat percentiles (usec): 00:31:33.687 | 1.00th=[12649], 5.00th=[13960], 10.00th=[14484], 20.00th=[15008], 00:31:33.687 | 30.00th=[15533], 40.00th=[15795], 50.00th=[16188], 60.00th=[16450], 00:31:33.687 | 70.00th=[16909], 80.00th=[17433], 90.00th=[17957], 95.00th=[18220], 00:31:33.687 | 99.00th=[19268], 99.50th=[19792], 99.90th=[23987], 99.95th=[25560], 00:31:33.687 | 99.99th=[25560] 00:31:33.687 bw ( KiB/s): min=16520, max=18008, per=99.79%, avg=17436.00, stdev=653.67, samples=4 00:31:33.687 iops : min= 4130, max= 4502, avg=4359.00, stdev=163.42, samples=4 00:31:33.687 write: IOPS=4365, BW=17.1MiB/s (17.9MB/s)(34.3MiB/2012msec); 0 zone resets 00:31:33.687 slat (usec): min=2, max=163, avg= 3.75, stdev= 2.17 00:31:33.687 clat (usec): min=2953, max=23395, avg=12982.34, stdev=1235.28 00:31:33.687 lat (usec): min=2970, max=23398, avg=12986.09, stdev=1235.26 00:31:33.687 clat percentiles (usec): 00:31:33.687 | 1.00th=[10290], 5.00th=[11207], 10.00th=[11600], 20.00th=[11994], 00:31:33.687 | 30.00th=[12387], 40.00th=[12649], 50.00th=[13042], 60.00th=[13304], 00:31:33.687 | 70.00th=[13566], 80.00th=[13960], 90.00th=[14353], 95.00th=[14746], 00:31:33.687 | 99.00th=[15664], 99.50th=[16319], 99.90th=[20841], 99.95th=[23200], 00:31:33.687 | 99.99th=[23462] 00:31:33.687 bw ( KiB/s): min=17408, max=17480, per=99.96%, avg=17456.00, stdev=32.66, samples=4 00:31:33.687 iops : min= 4352, max= 4370, avg=4364.00, stdev= 8.16, samples=4 00:31:33.687 lat (msec) : 4=0.01%, 10=0.40%, 20=99.29%, 50=0.31% 00:31:33.687 cpu : usr=62.95%, sys=33.76%, ctx=74, majf=0, minf=1534 00:31:33.687 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:31:33.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.687 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:33.687 issued rwts: total=8789,8784,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.687 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:33.687 00:31:33.687 Run status group 0 (all jobs): 00:31:33.687 READ: bw=17.1MiB/s (17.9MB/s), 17.1MiB/s-17.1MiB/s (17.9MB/s-17.9MB/s), io=34.3MiB (36.0MB), run=2012-2012msec 00:31:33.687 WRITE: bw=17.1MiB/s (17.9MB/s), 17.1MiB/s-17.1MiB/s (17.9MB/s-17.9MB/s), io=34.3MiB (36.0MB), run=2012-2012msec 00:31:33.687 ----------------------------------------------------- 00:31:33.687 Suppressions used: 00:31:33.687 count bytes template 00:31:33.687 1 58 /usr/src/fio/parse.c 00:31:33.687 1 8 libtcmalloc_minimal.so 00:31:33.687 ----------------------------------------------------- 00:31:33.687 00:31:33.687 16:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:31:33.944 16:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:31:33.944 16:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:31:39.208 16:36:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:39.208 16:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:31:41.774 16:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:41.774 16:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:31:43.677 16:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:43.677 16:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:31:43.677 16:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:31:43.677 16:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:43.677 16:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:31:43.677 16:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:43.677 16:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:31:43.677 16:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:43.677 16:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:43.677 rmmod nvme_tcp 00:31:43.677 rmmod nvme_fabrics 00:31:43.677 rmmod nvme_keyring 00:31:43.677 16:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:43.677 16:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:31:43.677 16:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:31:43.677 16:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 769691 ']' 00:31:43.677 16:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 769691 00:31:43.677 16:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 769691 ']' 00:31:43.677 16:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 769691 00:31:43.677 16:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:31:43.677 16:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:43.677 16:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 769691 00:31:43.677 16:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:43.677 16:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:43.677 16:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 769691' 00:31:43.677 killing process with pid 769691 00:31:43.677 16:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 769691 00:31:43.677 16:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 769691 00:31:45.577 16:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:45.577 16:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:45.577 16:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:45.577 16:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:45.577 
16:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:45.577 16:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:45.577 16:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:45.577 16:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.481 16:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:47.481 00:31:47.481 real 0m41.292s 00:31:47.481 user 2m36.190s 00:31:47.481 sys 0m8.198s 00:31:47.481 16:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:47.481 16:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.481 ************************************ 00:31:47.481 END TEST nvmf_fio_host 00:31:47.481 ************************************ 00:31:47.481 16:37:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:47.481 16:37:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:47.481 16:37:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:47.481 16:37:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.481 ************************************ 00:31:47.481 START TEST nvmf_failover 00:31:47.481 ************************************ 00:31:47.481 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:47.481 * Looking for test storage... 
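
Between tests the harness tears the environment back down in roughly the reverse order it was built: the kernel initiator modules are unloaded, the nvmf_tgt process is killed by the pid recorded at start-up, and the scratch network namespace and test addresses are flushed, as the nvmftestfini output above shows. A condensed sketch of that teardown, with error handling trimmed and names taken from this log, is:

    # Simplified teardown mirroring nvmftestfini as captured above.
    modprobe -v -r nvme-tcp      || true   # unload initiator transport modules
    modprobe -v -r nvme-fabrics  || true
    if [ -n "$nvmfpid" ] && kill -0 "$nvmfpid" 2>/dev/null; then
        kill "$nvmfpid"                    # stop the SPDK target started earlier
        wait "$nvmfpid" 2>/dev/null
    fi
    ip netns del cvl_0_0_ns_spdk 2>/dev/null   # drop the target-side network namespace
    ip -4 addr flush cvl_0_1     2>/dev/null   # clear the initiator-side test address
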
00:31:47.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:47.481 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:47.481 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:31:47.481 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:47.481 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:47.481 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:47.481 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:47.481 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:47.481 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:47.481 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:47.481 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:47.481 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:47.481 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:47.481 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:47.481 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:47.481 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:47.481 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:47.481 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:47.481 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:47.481 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:47.481 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:47.481 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:47.482 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:47.482 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.482 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.482 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.482 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:31:47.482 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.482 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:31:47.482 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:47.482 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:47.482 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:47.482 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:47.482 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:47.482 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:47.482 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:47.482 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:47.482 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:47.482 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:47.482 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:47.482 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:47.482 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 
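
The nvmftestinit call above leads into the physical-NIC topology setup recorded below: one port of the detected E810 pair (cvl_0_0) is moved into a private network namespace to act as the target at 10.0.0.2, the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and connectivity is verified with ping in both directions. A minimal reproduction of that wiring, with interface names taken from this log and to be adjusted for other hardware, is:

    # Put the target-side port in its own namespace and address both ends
    # (mirrors the ip/iptables/ping commands in the log below).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
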
00:31:47.482 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:47.482 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:47.482 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:47.482 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:47.482 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:47.482 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:47.482 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:47.482 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.482 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:47.482 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:47.482 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:31:47.482 16:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:49.389 16:37:08 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:49.389 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:49.389 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:49.389 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:49.389 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:49.389 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:49.390 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:49.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:49.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:31:49.390 00:31:49.390 --- 10.0.0.2 ping statistics --- 00:31:49.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:49.390 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:31:49.390 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:49.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:49.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:31:49.390 00:31:49.390 --- 10.0.0.1 ping statistics --- 00:31:49.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:49.390 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:31:49.390 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:49.390 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:31:49.390 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:49.390 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:49.390 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:49.390 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:49.390 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:49.390 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:49.390 16:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:49.390 16:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:31:49.390 16:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:49.390 16:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:49.390 16:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:49.390 16:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=776242 00:31:49.390 16:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:49.390 16:37:09 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 776242 00:31:49.390 16:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 776242 ']' 00:31:49.390 16:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:49.390 16:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:49.390 16:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:49.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:49.390 16:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:49.390 16:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:49.390 [2024-07-26 16:37:09.109674] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:31:49.390 [2024-07-26 16:37:09.109814] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:49.649 EAL: No free 2048 kB hugepages reported on node 1 00:31:49.649 [2024-07-26 16:37:09.255179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:49.909 [2024-07-26 16:37:09.515277] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:49.909 [2024-07-26 16:37:09.515357] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:49.909 [2024-07-26 16:37:09.515400] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:49.909 [2024-07-26 16:37:09.515422] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:49.909 [2024-07-26 16:37:09.515444] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
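The target application is then started inside that namespace. A minimal sketch of the launch follows; SPDK_DIR is shorthand introduced here for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk, and waitforlisten is the autotest_common.sh helper seen in the trace.

  # nvmfappstart -m 0xE, condensed: run nvmf_tgt in the target namespace and block
  # until its default RPC socket (/var/tmp/spdk.sock) answers.
  ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!                  # 776242 in this run
  waitforlisten "$nvmfpid"    # polls /var/tmp/spdk.sock until the target is ready

-e 0xFFFF is the tracepoint group mask the app_setup_trace notices above refer to, and -m 0xE is why three reactors come up on cores 1-3 in the lines that follow.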
00:31:49.909 [2024-07-26 16:37:09.515589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:49.909 [2024-07-26 16:37:09.515745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:49.909 [2024-07-26 16:37:09.515755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:50.475 16:37:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:50.475 16:37:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:31:50.475 16:37:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:50.475 16:37:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:50.475 16:37:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:50.475 16:37:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:50.475 16:37:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:50.732 [2024-07-26 16:37:10.333162] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:50.732 16:37:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:50.990 Malloc0 00:31:50.990 16:37:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:51.246 16:37:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:51.503 16:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:51.761 [2024-07-26 16:37:11.428181] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:51.761 16:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:52.018 [2024-07-26 16:37:11.680915] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:52.018 16:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:52.276 [2024-07-26 16:37:11.925779] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:52.276 16:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=776540 00:31:52.276 16:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:31:52.276 16:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; 
nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:52.276 16:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 776540 /var/tmp/bdevperf.sock 00:31:52.276 16:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 776540 ']' 00:31:52.276 16:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:52.276 16:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:52.276 16:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:52.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:52.276 16:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:52.276 16:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:53.208 16:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:53.208 16:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:31:53.208 16:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:53.772 NVMe0n1 00:31:53.772 16:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:54.029 00:31:54.029 16:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=776801 00:31:54.029 16:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:54.029 16:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:31:54.961 16:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:55.218 [2024-07-26 16:37:14.930078] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:31:55.218 [2024-07-26 16:37:14.930168] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:31:55.218 [2024-07-26 16:37:14.930199] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:31:55.218 [2024-07-26 16:37:14.930218] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:31:55.218 [2024-07-26 16:37:14.930235] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:31:55.218 [2024-07-26 16:37:14.930252] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:31:55.218 [2024-07-26 16:37:14.930270] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:31:55.218 [2024-07-26 16:37:14.930287] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:31:55.218 [2024-07-26 16:37:14.930304] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:31:55.218 [2024-07-26 16:37:14.930321] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:31:55.218 [2024-07-26 16:37:14.930338] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:31:55.218 [2024-07-26 16:37:14.930369] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:31:55.218 [2024-07-26 16:37:14.930387] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:31:55.218 [2024-07-26 16:37:14.930403] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:31:55.218 [2024-07-26 16:37:14.930420] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:31:55.218 [2024-07-26 16:37:14.930437] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:31:55.218 [2024-07-26 16:37:14.930454] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:31:55.218 [2024-07-26 16:37:14.930470] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:31:55.218 [2024-07-26 16:37:14.930487] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:31:55.218 [2024-07-26 16:37:14.930503] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:31:55.218 [2024-07-26 16:37:14.930520] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:31:55.218 [2024-07-26 16:37:14.930536] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:31:55.218 [2024-07-26 16:37:14.930552] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:31:55.218 [2024-07-26 16:37:14.930569] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:31:55.218 [2024-07-26 16:37:14.930585] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:31:55.218 [2024-07-26 16:37:14.930612] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:31:55.218 [2024-07-26 16:37:14.930629] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:31:55.219 [2024-07-26 16:37:14.930646] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:31:55.219 [2024-07-26 16:37:14.930663] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:31:55.219 [2024-07-26 16:37:14.930679] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:31:55.219 [2024-07-26 16:37:14.930696] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:31:55.219 [2024-07-26 16:37:14.930712] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:31:55.219 [2024-07-26 16:37:14.930729] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:31:55.219 [2024-07-26 16:37:14.930745] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:31:55.219 [2024-07-26 16:37:14.930761] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:31:55.219 [2024-07-26 16:37:14.930778] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:31:55.219 [2024-07-26 16:37:14.930794] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:31:55.219 16:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:31:58.535 16:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:58.793 00:31:58.793 16:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:59.051 [2024-07-26 16:37:18.734966] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.051 [2024-07-26 16:37:18.735091] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.051 [2024-07-26 16:37:18.735124] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.051 [2024-07-26 16:37:18.735144] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.051 [2024-07-26 16:37:18.735163] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.051 [2024-07-26 16:37:18.735181] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.051 [2024-07-26 16:37:18.735199] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.051 [2024-07-26 16:37:18.735217] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the 
state(5) to be set 00:31:59.051 [2024-07-26 16:37:18.735235] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.051 [2024-07-26 16:37:18.735253] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.735271] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.735302] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.735322] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.735340] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.735358] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.735375] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.735393] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.735410] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.735427] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.735444] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.735461] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.735479] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.735496] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.735514] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.735531] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.735549] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.735582] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.735600] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.735617] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the 
state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.735649] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.735665] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.735682] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.735699] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.735715] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.735732] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.735748] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.735765] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.735785] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.735802] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.735819] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.735835] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.735853] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.735869] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.735886] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.735902] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.735919] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.735936] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.735953] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.735969] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.735986] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the 
state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.736002] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.736018] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.736035] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.736053] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.736079] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.736097] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.736113] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.736129] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.736145] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.736162] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.736178] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.736194] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.736210] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.736230] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.736247] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.736263] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.736280] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.736296] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.736312] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 [2024-07-26 16:37:18.736328] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:59.052 16:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:32:02.334 16:37:21 nvmf_tcp.nvmf_host.nvmf_failover -- 
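Taken together, the rpc.py calls traced so far configure the target and drive the first two failover rounds. A condensed sketch follows, with RPC and BPERF_RPC as shorthand introduced here (SPDK_DIR as in the earlier sketch); the extra path on port 4422 is attached through bdevperf's own RPC socket, exactly as in the trace.

  RPC="$SPDK_DIR/scripts/rpc.py"                # target RPC socket: /var/tmp/spdk.sock
  BPERF_RPC="$RPC -s /var/tmp/bdevperf.sock"    # bdevperf's RPC socket

  # Target side: one 64 MB malloc namespace (512 B blocks) behind three TCP listeners.
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done

  # Failover rounds while bdevperf runs I/O: drop the listener the active path is
  # using, give the NVMe bdev time to switch paths, then repeat. The repeated
  # "recv state of tqpair" notices above accompany the dropped connections being torn down.
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 3
  $BPERF_RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  sleep 3

At every step at least one listener stays up (4420 is re-added below before 4422 is removed), so the NVMe0n1 bdev in bdevperf always has a live path left to fail over to.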
host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:02.334 [2024-07-26 16:37:22.006009] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:02.334 16:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:32:03.268 16:37:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:03.527 [2024-07-26 16:37:23.275997] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:03.527 [2024-07-26 16:37:23.276043] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:03.527 [2024-07-26 16:37:23.276075] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:03.527 [2024-07-26 16:37:23.276096] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:03.527 [2024-07-26 16:37:23.276115] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:03.527 [2024-07-26 16:37:23.276132] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:03.527 [2024-07-26 16:37:23.276151] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:03.527 [2024-07-26 16:37:23.276169] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:03.527 [2024-07-26 16:37:23.276226] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:03.527 [2024-07-26 16:37:23.276271] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:03.527 [2024-07-26 16:37:23.276290] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:03.527 [2024-07-26 16:37:23.276309] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:03.527 [2024-07-26 16:37:23.276326] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:03.527 [2024-07-26 16:37:23.276344] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:03.527 [2024-07-26 16:37:23.276362] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:03.527 [2024-07-26 16:37:23.276388] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:03.527 [2024-07-26 16:37:23.276408] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:03.527 [2024-07-26 16:37:23.276427] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:03.527 [2024-07-26 16:37:23.276446] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:03.527 [2024-07-26 16:37:23.276464] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:03.527 [2024-07-26 16:37:23.276481] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:03.527 [2024-07-26 16:37:23.276499] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:03.527 [2024-07-26 16:37:23.276516] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:03.527 [2024-07-26 16:37:23.276550] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:03.527 [2024-07-26 16:37:23.276567] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:03.527 [2024-07-26 16:37:23.276584] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:03.527 [2024-07-26 16:37:23.276616] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:32:03.785 16:37:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 776801 00:32:10.352 0 00:32:10.352 16:37:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 776540 00:32:10.352 16:37:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 776540 ']' 00:32:10.352 16:37:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 776540 00:32:10.352 16:37:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:32:10.352 16:37:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:10.352 16:37:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 776540 00:32:10.352 16:37:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:10.352 16:37:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:10.352 16:37:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 776540' 00:32:10.352 killing process with pid 776540 00:32:10.352 16:37:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 776540 00:32:10.352 16:37:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 776540 00:32:10.352 16:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:10.352 [2024-07-26 16:37:12.023346] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
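The initiator half of the test drives bdevperf over its own RPC socket, and try.txt, dumped by the cat above, is that bdevperf instance's log. A condensed sketch follows; SPDK_DIR and BPERF_RPC are the same shorthand as before, and the redirection into try.txt is inferred from the cat, not quoted from the script.

  # bdevperf started with -z sits idle until bdevs are attached over its RPC socket.
  "$SPDK_DIR/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 15 -f > try.txt 2>&1 &
  bdevperf_pid=$!                               # 776540 in this run
  BPERF_RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock"

  # Two attaches with the same -b name and NQN give the controller an alternate
  # (failover) path; both end up behind the single bdev NVMe0n1.
  $BPERF_RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $BPERF_RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # Kick off the 15 s verify workload, wait for it to finish, then shut bdevperf down.
  "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests &
  run_test_pid=$!                               # 776801 in this run
  wait $run_test_pid
  kill $bdevperf_pid

In the try.txt excerpt that follows, the ABORTED - SQ DELETION completions are the in-flight commands on the dropped path being failed back when its queue pair is deleted; the bdev layer then retries them on a surviving path, which is the behaviour this test exercises.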
00:32:10.352 [2024-07-26 16:37:12.023538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid776540 ] 00:32:10.352 EAL: No free 2048 kB hugepages reported on node 1 00:32:10.352 [2024-07-26 16:37:12.152249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:10.352 [2024-07-26 16:37:12.393648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:10.352 Running I/O for 15 seconds... 00:32:10.352 [2024-07-26 16:37:14.932178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:54984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.352 [2024-07-26 16:37:14.932237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.352 [2024-07-26 16:37:14.932282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:54992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.352 [2024-07-26 16:37:14.932307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.352 [2024-07-26 16:37:14.932332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:55000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.352 [2024-07-26 16:37:14.932356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.352 [2024-07-26 16:37:14.932396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:55008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.352 [2024-07-26 16:37:14.932417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.352 [2024-07-26 16:37:14.932440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:55016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.352 [2024-07-26 16:37:14.932476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.352 [2024-07-26 16:37:14.932498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:55024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.352 [2024-07-26 16:37:14.932519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.352 [2024-07-26 16:37:14.932541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:55032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.352 [2024-07-26 16:37:14.932560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.352 [2024-07-26 16:37:14.932582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:55040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.352 [2024-07-26 16:37:14.932602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.352 [2024-07-26 16:37:14.932624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 
lba:55048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.352 [2024-07-26 16:37:14.932644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.352 [2024-07-26 16:37:14.932665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:55056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.352 [2024-07-26 16:37:14.932686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.352 [2024-07-26 16:37:14.932708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:55064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.352 [2024-07-26 16:37:14.932729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.352 [2024-07-26 16:37:14.932759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:55072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.352 [2024-07-26 16:37:14.932780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.352 [2024-07-26 16:37:14.932803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:55080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.352 [2024-07-26 16:37:14.932823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.352 [2024-07-26 16:37:14.932844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:55088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.352 [2024-07-26 16:37:14.932864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.352 [2024-07-26 16:37:14.932886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:55096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.352 [2024-07-26 16:37:14.932922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.352 [2024-07-26 16:37:14.932946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:55104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.352 [2024-07-26 16:37:14.932967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.352 [2024-07-26 16:37:14.932988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:55112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.352 [2024-07-26 16:37:14.933008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.352 [2024-07-26 16:37:14.933029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:55120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.352 [2024-07-26 16:37:14.933049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.352 [2024-07-26 16:37:14.933099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:55128 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:32:10.352 [2024-07-26 16:37:14.933122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.352 [2024-07-26 16:37:14.933145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:55136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.352 [2024-07-26 16:37:14.933165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.352 [2024-07-26 16:37:14.933188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:55144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.353 [2024-07-26 16:37:14.933208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.353 [2024-07-26 16:37:14.933231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:55152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.353 [2024-07-26 16:37:14.933251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.353 [2024-07-26 16:37:14.933274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:55160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.353 [2024-07-26 16:37:14.933294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.353 [2024-07-26 16:37:14.933317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:55168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.353 [2024-07-26 16:37:14.933342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.353 [2024-07-26 16:37:14.933379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:55176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.353 [2024-07-26 16:37:14.933401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.353 [2024-07-26 16:37:14.933423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:55184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.353 [2024-07-26 16:37:14.933442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.353 [2024-07-26 16:37:14.933465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:55192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.353 [2024-07-26 16:37:14.933485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.353 [2024-07-26 16:37:14.933506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:55200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.353 [2024-07-26 16:37:14.933526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.353 [2024-07-26 16:37:14.933548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:55208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.353 [2024-07-26 
16:37:14.933567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.353 [2024-07-26 16:37:14.933589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:55216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.353 [2024-07-26 16:37:14.933608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.353 [2024-07-26 16:37:14.933630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:55224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.353 [2024-07-26 16:37:14.933650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.353 [2024-07-26 16:37:14.933671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:55232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.353 [2024-07-26 16:37:14.933691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.353 [2024-07-26 16:37:14.933713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:55240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.353 [2024-07-26 16:37:14.933734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.353 [2024-07-26 16:37:14.933755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:55248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.353 [2024-07-26 16:37:14.933775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.353 [2024-07-26 16:37:14.933797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:55256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.353 [2024-07-26 16:37:14.933817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.353 [2024-07-26 16:37:14.933839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:55264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.353 [2024-07-26 16:37:14.933858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.353 [2024-07-26 16:37:14.933884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:55272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.353 [2024-07-26 16:37:14.933905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.353 [2024-07-26 16:37:14.933926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:55280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.353 [2024-07-26 16:37:14.933946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.353 [2024-07-26 16:37:14.933968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:55288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.353 [2024-07-26 16:37:14.933988] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.353 [2024-07-26 16:37:14.934011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:55312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.353 [2024-07-26 16:37:14.934031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.353 [2024-07-26 16:37:14.934053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:55320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.353 [2024-07-26 16:37:14.934097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.353 [2024-07-26 16:37:14.934122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:55328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.353 [2024-07-26 16:37:14.934143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.353 [2024-07-26 16:37:14.934165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:55336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.353 [2024-07-26 16:37:14.934186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.353 [2024-07-26 16:37:14.934209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:55344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.353 [2024-07-26 16:37:14.934229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.353 [2024-07-26 16:37:14.934251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:55352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.353 [2024-07-26 16:37:14.934273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.353 [2024-07-26 16:37:14.934296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:55360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.353 [2024-07-26 16:37:14.934317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.353 [2024-07-26 16:37:14.934339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.353 [2024-07-26 16:37:14.934360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.353 [2024-07-26 16:37:14.934398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:55376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.353 [2024-07-26 16:37:14.934418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.353 [2024-07-26 16:37:14.934440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:55384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.353 [2024-07-26 16:37:14.934465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.353 [2024-07-26 16:37:14.934488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:55392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.353 [2024-07-26 16:37:14.934508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.353 [2024-07-26 16:37:14.934530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:55400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.353 [2024-07-26 16:37:14.934551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.353 [2024-07-26 16:37:14.934572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:55408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.353 [2024-07-26 16:37:14.934592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.353 [2024-07-26 16:37:14.934614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:55416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.353 [2024-07-26 16:37:14.934635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.353 [2024-07-26 16:37:14.934657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:55424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.353 [2024-07-26 16:37:14.934677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.353 [2024-07-26 16:37:14.934698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:55432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.353 [2024-07-26 16:37:14.934718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.353 [2024-07-26 16:37:14.934740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:55440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.353 [2024-07-26 16:37:14.934760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.353 [2024-07-26 16:37:14.934781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:55448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.353 [2024-07-26 16:37:14.934801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.353 [2024-07-26 16:37:14.934822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:55456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.353 [2024-07-26 16:37:14.934842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.353 [2024-07-26 16:37:14.934863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:55464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.353 [2024-07-26 16:37:14.934883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:32:10.353 [2024-07-26 16:37:14.934904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:55472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.354 [2024-07-26 16:37:14.934924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.354 [2024-07-26 16:37:14.934946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:55480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.354 [2024-07-26 16:37:14.934966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.354 [2024-07-26 16:37:14.934987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:55488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.354 [2024-07-26 16:37:14.935011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.354 [2024-07-26 16:37:14.935034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:55496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.354 [2024-07-26 16:37:14.935054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.354 [2024-07-26 16:37:14.935103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:55504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.354 [2024-07-26 16:37:14.935125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.354 [2024-07-26 16:37:14.935147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:55512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.354 [2024-07-26 16:37:14.935167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.354 [2024-07-26 16:37:14.935190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:55520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.354 [2024-07-26 16:37:14.935211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.354 [2024-07-26 16:37:14.935233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:55528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.354 [2024-07-26 16:37:14.935254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.354 [2024-07-26 16:37:14.935276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:55536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.354 [2024-07-26 16:37:14.935296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.354 [2024-07-26 16:37:14.935319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:55544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.354 [2024-07-26 16:37:14.935340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.354 [2024-07-26 16:37:14.935362] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:55552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.354 [2024-07-26 16:37:14.935399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.354 [2024-07-26 16:37:14.935421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:55560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.354 [2024-07-26 16:37:14.935442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.354 [2024-07-26 16:37:14.935464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:55568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.354 [2024-07-26 16:37:14.935484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.354 [2024-07-26 16:37:14.935506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:55576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.354 [2024-07-26 16:37:14.935525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.354 [2024-07-26 16:37:14.935547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:55584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.354 [2024-07-26 16:37:14.935566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.354 [2024-07-26 16:37:14.935592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:55592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.354 [2024-07-26 16:37:14.935611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.354 [2024-07-26 16:37:14.935633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:55600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.354 [2024-07-26 16:37:14.935652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.354 [2024-07-26 16:37:14.935674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:55608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.354 [2024-07-26 16:37:14.935693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.354 [2024-07-26 16:37:14.935714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:55616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.354 [2024-07-26 16:37:14.935733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.354 [2024-07-26 16:37:14.935754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:55624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.354 [2024-07-26 16:37:14.935789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.354 [2024-07-26 16:37:14.935829] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:55632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.354 [2024-07-26 16:37:14.935850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.354 [2024-07-26 16:37:14.935872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:55640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.354 [2024-07-26 16:37:14.935892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.354 [2024-07-26 16:37:14.935914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:55648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.354 [2024-07-26 16:37:14.935935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.354 [2024-07-26 16:37:14.935957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:55656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.354 [2024-07-26 16:37:14.935978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.354 [2024-07-26 16:37:14.936000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:55664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.354 [2024-07-26 16:37:14.936020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.354 [2024-07-26 16:37:14.936057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:55672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.354 [2024-07-26 16:37:14.936087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.354 [2024-07-26 16:37:14.936111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:55680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.354 [2024-07-26 16:37:14.936132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.354 [2024-07-26 16:37:14.936155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:55688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.354 [2024-07-26 16:37:14.936180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.354 [2024-07-26 16:37:14.936231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.354 [2024-07-26 16:37:14.936259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55696 len:8 PRP1 0x0 PRP2 0x0 00:32:10.354 [2024-07-26 16:37:14.936280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.354 [2024-07-26 16:37:14.936308] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.354 [2024-07-26 16:37:14.936335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.354 [2024-07-26 16:37:14.936354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:55704 len:8 PRP1 0x0 PRP2 0x0 00:32:10.354 [2024-07-26 16:37:14.936390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.354 [2024-07-26 16:37:14.936411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.354 [2024-07-26 16:37:14.936428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.354 [2024-07-26 16:37:14.936444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55712 len:8 PRP1 0x0 PRP2 0x0 00:32:10.354 [2024-07-26 16:37:14.936463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.354 [2024-07-26 16:37:14.936481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.354 [2024-07-26 16:37:14.936497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.354 [2024-07-26 16:37:14.936513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55720 len:8 PRP1 0x0 PRP2 0x0 00:32:10.354 [2024-07-26 16:37:14.936531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.354 [2024-07-26 16:37:14.936550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.354 [2024-07-26 16:37:14.936565] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.354 [2024-07-26 16:37:14.936581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55728 len:8 PRP1 0x0 PRP2 0x0 00:32:10.354 [2024-07-26 16:37:14.936599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.354 [2024-07-26 16:37:14.936618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.354 [2024-07-26 16:37:14.936633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.354 [2024-07-26 16:37:14.936649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55736 len:8 PRP1 0x0 PRP2 0x0 00:32:10.354 [2024-07-26 16:37:14.936668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.354 [2024-07-26 16:37:14.936686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.354 [2024-07-26 16:37:14.936702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.354 [2024-07-26 16:37:14.936718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55744 len:8 PRP1 0x0 PRP2 0x0 00:32:10.354 [2024-07-26 16:37:14.936736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.354 [2024-07-26 16:37:14.936754] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.354 [2024-07-26 16:37:14.936770] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.355 [2024-07-26 16:37:14.936786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55752 len:8 PRP1 0x0 PRP2 0x0 00:32:10.355 
[2024-07-26 16:37:14.936808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.355 [2024-07-26 16:37:14.936827] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.355 [2024-07-26 16:37:14.936844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.355 [2024-07-26 16:37:14.936861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55760 len:8 PRP1 0x0 PRP2 0x0 00:32:10.355 [2024-07-26 16:37:14.936879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.355 [2024-07-26 16:37:14.936898] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.355 [2024-07-26 16:37:14.936914] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.355 [2024-07-26 16:37:14.936929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55768 len:8 PRP1 0x0 PRP2 0x0 00:32:10.355 [2024-07-26 16:37:14.936947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.355 [2024-07-26 16:37:14.936966] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.355 [2024-07-26 16:37:14.936981] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.355 [2024-07-26 16:37:14.936997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55776 len:8 PRP1 0x0 PRP2 0x0 00:32:10.355 [2024-07-26 16:37:14.937015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.355 [2024-07-26 16:37:14.937033] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.355 [2024-07-26 16:37:14.937048] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.355 [2024-07-26 16:37:14.937086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55784 len:8 PRP1 0x0 PRP2 0x0 00:32:10.355 [2024-07-26 16:37:14.937108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.355 [2024-07-26 16:37:14.937128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.355 [2024-07-26 16:37:14.937144] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.355 [2024-07-26 16:37:14.937161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55792 len:8 PRP1 0x0 PRP2 0x0 00:32:10.355 [2024-07-26 16:37:14.937180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.355 [2024-07-26 16:37:14.937199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.355 [2024-07-26 16:37:14.937215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.355 [2024-07-26 16:37:14.937231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55800 len:8 PRP1 0x0 PRP2 0x0 00:32:10.355 [2024-07-26 16:37:14.937250] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.355 [2024-07-26 16:37:14.937269] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.355 [2024-07-26 16:37:14.937285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.355 [2024-07-26 16:37:14.937302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55808 len:8 PRP1 0x0 PRP2 0x0 00:32:10.355 [2024-07-26 16:37:14.937321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.355 [2024-07-26 16:37:14.937340] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.355 [2024-07-26 16:37:14.937360] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.355 [2024-07-26 16:37:14.937378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55816 len:8 PRP1 0x0 PRP2 0x0 00:32:10.355 [2024-07-26 16:37:14.937397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.355 [2024-07-26 16:37:14.937417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.355 [2024-07-26 16:37:14.937433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.355 [2024-07-26 16:37:14.937465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55824 len:8 PRP1 0x0 PRP2 0x0 00:32:10.355 [2024-07-26 16:37:14.937485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.355 [2024-07-26 16:37:14.937503] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.355 [2024-07-26 16:37:14.937519] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.355 [2024-07-26 16:37:14.937535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55832 len:8 PRP1 0x0 PRP2 0x0 00:32:10.355 [2024-07-26 16:37:14.937553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.355 [2024-07-26 16:37:14.937572] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.355 [2024-07-26 16:37:14.937588] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.355 [2024-07-26 16:37:14.937604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55840 len:8 PRP1 0x0 PRP2 0x0 00:32:10.355 [2024-07-26 16:37:14.937621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.355 [2024-07-26 16:37:14.937640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.355 [2024-07-26 16:37:14.937655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.355 [2024-07-26 16:37:14.937671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55848 len:8 PRP1 0x0 PRP2 0x0 00:32:10.355 [2024-07-26 16:37:14.937689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.355 [2024-07-26 16:37:14.937707] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.355 [2024-07-26 16:37:14.937723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.355 [2024-07-26 16:37:14.937739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55856 len:8 PRP1 0x0 PRP2 0x0 00:32:10.355 [2024-07-26 16:37:14.937757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.355 [2024-07-26 16:37:14.937775] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.355 [2024-07-26 16:37:14.937791] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.355 [2024-07-26 16:37:14.937807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55864 len:8 PRP1 0x0 PRP2 0x0 00:32:10.355 [2024-07-26 16:37:14.937825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.355 [2024-07-26 16:37:14.937843] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.355 [2024-07-26 16:37:14.937859] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.355 [2024-07-26 16:37:14.937875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55872 len:8 PRP1 0x0 PRP2 0x0 00:32:10.355 [2024-07-26 16:37:14.937893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.355 [2024-07-26 16:37:14.937915] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.355 [2024-07-26 16:37:14.937931] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.355 [2024-07-26 16:37:14.937947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55880 len:8 PRP1 0x0 PRP2 0x0 00:32:10.355 [2024-07-26 16:37:14.937965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.355 [2024-07-26 16:37:14.937983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.355 [2024-07-26 16:37:14.938000] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.355 [2024-07-26 16:37:14.938016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55888 len:8 PRP1 0x0 PRP2 0x0 00:32:10.355 [2024-07-26 16:37:14.938034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.355 [2024-07-26 16:37:14.938052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.355 [2024-07-26 16:37:14.938093] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.355 [2024-07-26 16:37:14.938112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55896 len:8 PRP1 0x0 PRP2 0x0 00:32:10.355 [2024-07-26 16:37:14.938131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:32:10.355 [2024-07-26 16:37:14.938151] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.355 [2024-07-26 16:37:14.938168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.355 [2024-07-26 16:37:14.938185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55904 len:8 PRP1 0x0 PRP2 0x0 00:32:10.355 [2024-07-26 16:37:14.938204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.355 [2024-07-26 16:37:14.938223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.355 [2024-07-26 16:37:14.938240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.355 [2024-07-26 16:37:14.938256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55912 len:8 PRP1 0x0 PRP2 0x0 00:32:10.355 [2024-07-26 16:37:14.938275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.355 [2024-07-26 16:37:14.938308] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.355 [2024-07-26 16:37:14.938325] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.355 [2024-07-26 16:37:14.938343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55920 len:8 PRP1 0x0 PRP2 0x0 00:32:10.355 [2024-07-26 16:37:14.938361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.355 [2024-07-26 16:37:14.938396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.355 [2024-07-26 16:37:14.938413] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.355 [2024-07-26 16:37:14.938429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55928 len:8 PRP1 0x0 PRP2 0x0 00:32:10.355 [2024-07-26 16:37:14.938448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.355 [2024-07-26 16:37:14.938466] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.355 [2024-07-26 16:37:14.938483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.355 [2024-07-26 16:37:14.938506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55936 len:8 PRP1 0x0 PRP2 0x0 00:32:10.356 [2024-07-26 16:37:14.938529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.356 [2024-07-26 16:37:14.938549] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.356 [2024-07-26 16:37:14.938565] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.356 [2024-07-26 16:37:14.938582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55944 len:8 PRP1 0x0 PRP2 0x0 00:32:10.356 [2024-07-26 16:37:14.938600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.356 [2024-07-26 16:37:14.938618] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.356 [2024-07-26 16:37:14.938635] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.356 [2024-07-26 16:37:14.938652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55952 len:8 PRP1 0x0 PRP2 0x0 00:32:10.356 [2024-07-26 16:37:14.938670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.356 [2024-07-26 16:37:14.938689] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.356 [2024-07-26 16:37:14.938704] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.356 [2024-07-26 16:37:14.938720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55960 len:8 PRP1 0x0 PRP2 0x0 00:32:10.356 [2024-07-26 16:37:14.938739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.356 [2024-07-26 16:37:14.938757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.356 [2024-07-26 16:37:14.938773] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.356 [2024-07-26 16:37:14.938790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55968 len:8 PRP1 0x0 PRP2 0x0 00:32:10.356 [2024-07-26 16:37:14.938808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.356 [2024-07-26 16:37:14.938827] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.356 [2024-07-26 16:37:14.938842] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.356 [2024-07-26 16:37:14.938858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55976 len:8 PRP1 0x0 PRP2 0x0 00:32:10.356 [2024-07-26 16:37:14.938876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.356 [2024-07-26 16:37:14.938895] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.356 [2024-07-26 16:37:14.938911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.356 [2024-07-26 16:37:14.938927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55984 len:8 PRP1 0x0 PRP2 0x0 00:32:10.356 [2024-07-26 16:37:14.938945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.356 [2024-07-26 16:37:14.938963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.356 [2024-07-26 16:37:14.938979] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.356 [2024-07-26 16:37:14.938995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55992 len:8 PRP1 0x0 PRP2 0x0 00:32:10.356 [2024-07-26 16:37:14.939013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.356 [2024-07-26 16:37:14.939031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o
00:32:10.356 [2024-07-26 16:37:14.939047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:10.356 [2024-07-26 16:37:14.939090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56000 len:8 PRP1 0x0 PRP2 0x0
00:32:10.356 [2024-07-26 16:37:14.939112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:10.356 [2024-07-26 16:37:14.939133] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:10.356 [2024-07-26 16:37:14.939150] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:10.356 [2024-07-26 16:37:14.939166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55296 len:8 PRP1 0x0 PRP2 0x0
00:32:10.356 [2024-07-26 16:37:14.939185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:10.356 [2024-07-26 16:37:14.939204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:10.356 [2024-07-26 16:37:14.939221] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:10.356 [2024-07-26 16:37:14.939238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55304 len:8 PRP1 0x0 PRP2 0x0
00:32:10.356 [2024-07-26 16:37:14.939257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:10.356 [2024-07-26 16:37:14.939544] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f2f00 was disconnected and freed. reset controller.
00:32:10.356 [2024-07-26 16:37:14.939581] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:32:10.356 [2024-07-26 16:37:14.939649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:32:10.356 [2024-07-26 16:37:14.939677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:10.356 [2024-07-26 16:37:14.939707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:32:10.356 [2024-07-26 16:37:14.939727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:10.356 [2024-07-26 16:37:14.939748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:32:10.356 [2024-07-26 16:37:14.939768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:10.356 [2024-07-26 16:37:14.939789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:32:10.356 [2024-07-26 16:37:14.939808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:10.356 [2024-07-26 16:37:14.939828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
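The bdev_nvme notices just above are the interesting part of this stretch: the qpair to 10.0.0.2:4420 has been disconnected and freed, every command still queued on it is completed manually with ABORTED - SQ DELETION, and bdev_nvme starts a failover to the alternate path 10.0.0.2:4421 before resetting the controller. As a rough sketch of how a host ends up with two paths to the same subsystem for this kind of failover, the setup can look like the commands below. These are not the exact commands this job ran; the bdev name Nvme0 is made up for illustration, and the flag spellings are assumptions based on SPDK's scripts/rpc.py bdev_nvme_attach_controller RPC.

  # primary path, matching the NQN and 10.0.0.2:4420 seen in this log
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  # alternate path on 10.0.0.2:4421; in "failover" multipath mode bdev_nvme
  # holds it as a standby and switches to it when the active path's qpairs drop
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1 -x failover

The reset against the new path completes a few lines below ("Resetting controller successful."), and another batch of queued commands is aborted at 16:37:18 as the test continues.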
00:32:10.356 [2024-07-26 16:37:14.939917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor
00:32:10.356 [2024-07-26 16:37:14.943841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:10.356 [2024-07-26 16:37:15.109382] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:32:10.356 [2024-07-26 16:37:18.737373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.356 [2024-07-26 16:37:18.737445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:10.356 [2024-07-26 16:37:18.737503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.356 [2024-07-26 16:37:18.737529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:10.356 [2024-07-26 16:37:18.737560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.356 [2024-07-26 16:37:18.737582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:10.356 [2024-07-26 16:37:18.737604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.356 [2024-07-26 16:37:18.737625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:10.356 [2024-07-26 16:37:18.737647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.357 [2024-07-26 16:37:18.737667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:10.357 [2024-07-26 16:37:18.737689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.357 [2024-07-26 16:37:18.737709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:10.357 [2024-07-26 16:37:18.737731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.357 [2024-07-26 16:37:18.737752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:10.357 [2024-07-26 16:37:18.737774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.357 [2024-07-26 16:37:18.737794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:10.357 [2024-07-26 16:37:18.737816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.357 [2024-07-26 16:37:18.737836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.357 [2024-07-26 16:37:18.737858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.357 [2024-07-26 16:37:18.737878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.357 [2024-07-26 16:37:18.737899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.357 [2024-07-26 16:37:18.737919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.357 [2024-07-26 16:37:18.737940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.357 [2024-07-26 16:37:18.737960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.357 [2024-07-26 16:37:18.737981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.357 [2024-07-26 16:37:18.738001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.357 [2024-07-26 16:37:18.738022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.357 [2024-07-26 16:37:18.738066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.357 [2024-07-26 16:37:18.738093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.358 [2024-07-26 16:37:18.738135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.358 [2024-07-26 16:37:18.738160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.358 [2024-07-26 16:37:18.738181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.358 [2024-07-26 16:37:18.738204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.358 [2024-07-26 16:37:18.738225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.358 [2024-07-26 16:37:18.738248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.358 [2024-07-26 16:37:18.738269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.358 [2024-07-26 16:37:18.738291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.358 [2024-07-26 16:37:18.738312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:32:10.358 [2024-07-26 16:37:18.738335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.358 [2024-07-26 16:37:18.738371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.358 [2024-07-26 16:37:18.738395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.358 [2024-07-26 16:37:18.738430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.358 [2024-07-26 16:37:18.738453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.358 [2024-07-26 16:37:18.738473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.358 [2024-07-26 16:37:18.738494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.358 [2024-07-26 16:37:18.738514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.358 [2024-07-26 16:37:18.738535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.358 [2024-07-26 16:37:18.738556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.358 [2024-07-26 16:37:18.738577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.358 [2024-07-26 16:37:18.738597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.358 [2024-07-26 16:37:18.738619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.358 [2024-07-26 16:37:18.738639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.358 [2024-07-26 16:37:18.738661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.358 [2024-07-26 16:37:18.738681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.358 [2024-07-26 16:37:18.738702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.358 [2024-07-26 16:37:18.738728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.358 [2024-07-26 16:37:18.738768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.358 [2024-07-26 16:37:18.738790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.358 [2024-07-26 
16:37:18.738811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.358 [2024-07-26 16:37:18.738832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.358 [2024-07-26 16:37:18.738854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.358 [2024-07-26 16:37:18.738873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.358 [2024-07-26 16:37:18.738895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.358 [2024-07-26 16:37:18.738915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.358 [2024-07-26 16:37:18.738936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.358 [2024-07-26 16:37:18.738957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.358 [2024-07-26 16:37:18.738978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.358 [2024-07-26 16:37:18.738997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.358 [2024-07-26 16:37:18.739019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.358 [2024-07-26 16:37:18.739039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.358 [2024-07-26 16:37:18.739066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.358 [2024-07-26 16:37:18.739105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.358 [2024-07-26 16:37:18.739128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.358 [2024-07-26 16:37:18.739165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.358 [2024-07-26 16:37:18.739189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.358 [2024-07-26 16:37:18.739211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.358 [2024-07-26 16:37:18.739234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.358 [2024-07-26 16:37:18.739255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.358 [2024-07-26 16:37:18.739279] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.358 [2024-07-26 16:37:18.739300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.358 [2024-07-26 16:37:18.739327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:13128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.358 [2024-07-26 16:37:18.739350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.358 [2024-07-26 16:37:18.739372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:13136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.358 [2024-07-26 16:37:18.739394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.358 [2024-07-26 16:37:18.739416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.358 [2024-07-26 16:37:18.739438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.358 [2024-07-26 16:37:18.739476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.358 [2024-07-26 16:37:18.739497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.358 [2024-07-26 16:37:18.739518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:13160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.358 [2024-07-26 16:37:18.739538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.358 [2024-07-26 16:37:18.739560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.358 [2024-07-26 16:37:18.739579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.358 [2024-07-26 16:37:18.739601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.358 [2024-07-26 16:37:18.739620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.358 [2024-07-26 16:37:18.739642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.358 [2024-07-26 16:37:18.739662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.358 [2024-07-26 16:37:18.739683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.358 [2024-07-26 16:37:18.739704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.358 [2024-07-26 16:37:18.739725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:82 nsid:1 lba:13200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.358 [2024-07-26 16:37:18.739745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.358 [2024-07-26 16:37:18.739766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.358 [2024-07-26 16:37:18.739787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.358 [2024-07-26 16:37:18.739807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.358 [2024-07-26 16:37:18.739827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.358 [2024-07-26 16:37:18.739848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.358 [2024-07-26 16:37:18.739872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.358 [2024-07-26 16:37:18.739895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.358 [2024-07-26 16:37:18.739916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.358 [2024-07-26 16:37:18.739937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.359 [2024-07-26 16:37:18.739957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.359 [2024-07-26 16:37:18.739978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.359 [2024-07-26 16:37:18.739998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.359 [2024-07-26 16:37:18.740019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.359 [2024-07-26 16:37:18.740054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.359 [2024-07-26 16:37:18.740086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.359 [2024-07-26 16:37:18.740124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.359 [2024-07-26 16:37:18.740149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.359 [2024-07-26 16:37:18.740170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.359 [2024-07-26 16:37:18.740193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13280 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:32:10.359 [2024-07-26 16:37:18.740214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.359 [2024-07-26 16:37:18.740236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.359 [2024-07-26 16:37:18.740257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.359 [2024-07-26 16:37:18.740280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.359 [2024-07-26 16:37:18.740302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.359 [2024-07-26 16:37:18.740325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.359 [2024-07-26 16:37:18.740346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.359 [2024-07-26 16:37:18.740369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.359 [2024-07-26 16:37:18.740390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.359 [2024-07-26 16:37:18.740429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.359 [2024-07-26 16:37:18.740450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.359 [2024-07-26 16:37:18.740493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.359 [2024-07-26 16:37:18.740516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.359 [2024-07-26 16:37:18.740540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.359 [2024-07-26 16:37:18.740562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.359 [2024-07-26 16:37:18.740584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.359 [2024-07-26 16:37:18.740606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.359 [2024-07-26 16:37:18.740629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.359 [2024-07-26 16:37:18.740651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.359 [2024-07-26 16:37:18.740674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.359 
[2024-07-26 16:37:18.740696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.359 [2024-07-26 16:37:18.740719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.359 [2024-07-26 16:37:18.740740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.359 [2024-07-26 16:37:18.740764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.359 [2024-07-26 16:37:18.740785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.359 [2024-07-26 16:37:18.740808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.359 [2024-07-26 16:37:18.740829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.359 [2024-07-26 16:37:18.740852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.359 [2024-07-26 16:37:18.740873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.359 [2024-07-26 16:37:18.740896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.359 [2024-07-26 16:37:18.740917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.359 [2024-07-26 16:37:18.740940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.359 [2024-07-26 16:37:18.740963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.359 [2024-07-26 16:37:18.740986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.359 [2024-07-26 16:37:18.741008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.359 [2024-07-26 16:37:18.741031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.359 [2024-07-26 16:37:18.741053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.359 [2024-07-26 16:37:18.741090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.359 [2024-07-26 16:37:18.741114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.359 [2024-07-26 16:37:18.741138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.359 [2024-07-26 16:37:18.741160] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.359 [2024-07-26 16:37:18.741184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.359 [2024-07-26 16:37:18.741206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.359 [2024-07-26 16:37:18.741229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.359 [2024-07-26 16:37:18.741250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.359 [2024-07-26 16:37:18.741274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.359 [2024-07-26 16:37:18.741296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.359 [2024-07-26 16:37:18.741320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.359 [2024-07-26 16:37:18.741341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.359 [2024-07-26 16:37:18.741364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.359 [2024-07-26 16:37:18.741385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.359 [2024-07-26 16:37:18.741424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.359 [2024-07-26 16:37:18.741445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.359 [2024-07-26 16:37:18.741467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.359 [2024-07-26 16:37:18.741488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.359 [2024-07-26 16:37:18.741510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.359 [2024-07-26 16:37:18.741530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.359 [2024-07-26 16:37:18.741552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.359 [2024-07-26 16:37:18.741573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.359 [2024-07-26 16:37:18.741595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.359 [2024-07-26 16:37:18.741616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.359 [2024-07-26 16:37:18.741638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.359 [2024-07-26 16:37:18.741662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.359 [2024-07-26 16:37:18.741685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.359 [2024-07-26 16:37:18.741706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.359 [2024-07-26 16:37:18.741741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.359 [2024-07-26 16:37:18.741763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.359 [2024-07-26 16:37:18.741805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.359 [2024-07-26 16:37:18.741831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:8 PRP1 0x0 PRP2 0x0 00:32:10.360 [2024-07-26 16:37:18.741852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.360 [2024-07-26 16:37:18.741954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:10.360 [2024-07-26 16:37:18.741991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.360 [2024-07-26 16:37:18.742017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:10.360 [2024-07-26 16:37:18.742037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.360 [2024-07-26 16:37:18.742066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:10.360 [2024-07-26 16:37:18.742090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.360 [2024-07-26 16:37:18.742112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:10.360 [2024-07-26 16:37:18.742132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.360 [2024-07-26 16:37:18.742151] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:32:10.360 [2024-07-26 16:37:18.742421] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.360 [2024-07-26 16:37:18.742446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.360 [2024-07-26 16:37:18.742465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13544 len:8 PRP1 0x0 PRP2 0x0 
00:32:10.360 [2024-07-26 16:37:18.742484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.360 [2024-07-26 16:37:18.742508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.360 [2024-07-26 16:37:18.742525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.360 [2024-07-26 16:37:18.742542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13552 len:8 PRP1 0x0 PRP2 0x0 00:32:10.360 [2024-07-26 16:37:18.742561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.360 [2024-07-26 16:37:18.742579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.360 [2024-07-26 16:37:18.742595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.360 [2024-07-26 16:37:18.742611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13560 len:8 PRP1 0x0 PRP2 0x0 00:32:10.360 [2024-07-26 16:37:18.742634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.360 [2024-07-26 16:37:18.742654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.360 [2024-07-26 16:37:18.742670] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.360 [2024-07-26 16:37:18.742686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:8 PRP1 0x0 PRP2 0x0 00:32:10.360 [2024-07-26 16:37:18.742704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.360 [2024-07-26 16:37:18.742723] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.360 [2024-07-26 16:37:18.742739] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.360 [2024-07-26 16:37:18.742756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13576 len:8 PRP1 0x0 PRP2 0x0 00:32:10.360 [2024-07-26 16:37:18.742774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.360 [2024-07-26 16:37:18.742792] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.360 [2024-07-26 16:37:18.742808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.360 [2024-07-26 16:37:18.742824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13584 len:8 PRP1 0x0 PRP2 0x0 00:32:10.360 [2024-07-26 16:37:18.742843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.360 [2024-07-26 16:37:18.742862] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.360 [2024-07-26 16:37:18.742877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.360 [2024-07-26 16:37:18.742894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13592 len:8 PRP1 0x0 PRP2 0x0 00:32:10.360 [2024-07-26 16:37:18.742912] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.360 [2024-07-26 16:37:18.742931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.360 [2024-07-26 16:37:18.742947] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.360 [2024-07-26 16:37:18.742963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:8 PRP1 0x0 PRP2 0x0 00:32:10.360 [2024-07-26 16:37:18.742981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.360 [2024-07-26 16:37:18.743002] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.360 [2024-07-26 16:37:18.743018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.360 [2024-07-26 16:37:18.743050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13608 len:8 PRP1 0x0 PRP2 0x0 00:32:10.360 [2024-07-26 16:37:18.743080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.360 [2024-07-26 16:37:18.743101] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.360 [2024-07-26 16:37:18.743119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.360 [2024-07-26 16:37:18.743136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13616 len:8 PRP1 0x0 PRP2 0x0 00:32:10.360 [2024-07-26 16:37:18.743155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.360 [2024-07-26 16:37:18.743174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.360 [2024-07-26 16:37:18.743194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.360 [2024-07-26 16:37:18.743213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13624 len:8 PRP1 0x0 PRP2 0x0 00:32:10.360 [2024-07-26 16:37:18.743231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.360 [2024-07-26 16:37:18.743250] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.360 [2024-07-26 16:37:18.743267] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.360 [2024-07-26 16:37:18.743284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:8 PRP1 0x0 PRP2 0x0 00:32:10.360 [2024-07-26 16:37:18.743302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.360 [2024-07-26 16:37:18.743322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.360 [2024-07-26 16:37:18.743339] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.360 [2024-07-26 16:37:18.743356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13640 len:8 PRP1 0x0 PRP2 0x0 00:32:10.360 [2024-07-26 16:37:18.743390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.360 [2024-07-26 16:37:18.743409] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.360 [2024-07-26 16:37:18.743425] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.360 [2024-07-26 16:37:18.743442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13648 len:8 PRP1 0x0 PRP2 0x0 00:32:10.360 [2024-07-26 16:37:18.743460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.360 [2024-07-26 16:37:18.743477] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.360 [2024-07-26 16:37:18.743493] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.360 [2024-07-26 16:37:18.743510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13656 len:8 PRP1 0x0 PRP2 0x0 00:32:10.360 [2024-07-26 16:37:18.743528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.360 [2024-07-26 16:37:18.743546] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.360 [2024-07-26 16:37:18.743562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.360 [2024-07-26 16:37:18.743578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:8 PRP1 0x0 PRP2 0x0 00:32:10.360 [2024-07-26 16:37:18.743596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.360 [2024-07-26 16:37:18.743614] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.360 [2024-07-26 16:37:18.743631] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.360 [2024-07-26 16:37:18.743647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13672 len:8 PRP1 0x0 PRP2 0x0 00:32:10.360 [2024-07-26 16:37:18.743664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.360 [2024-07-26 16:37:18.743683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.360 [2024-07-26 16:37:18.743699] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.360 [2024-07-26 16:37:18.743715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13680 len:8 PRP1 0x0 PRP2 0x0 00:32:10.360 [2024-07-26 16:37:18.743733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.360 [2024-07-26 16:37:18.743755] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.360 [2024-07-26 16:37:18.743772] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.360 [2024-07-26 16:37:18.743789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12904 len:8 PRP1 0x0 PRP2 0x0 00:32:10.360 [2024-07-26 16:37:18.743807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:32:10.360 [2024-07-26 16:37:18.743825] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.360 [2024-07-26 16:37:18.743841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.360 [2024-07-26 16:37:18.743858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12912 len:8 PRP1 0x0 PRP2 0x0 00:32:10.360 [2024-07-26 16:37:18.743876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.361 [2024-07-26 16:37:18.743894] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.361 [2024-07-26 16:37:18.743910] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.361 [2024-07-26 16:37:18.743927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12920 len:8 PRP1 0x0 PRP2 0x0 00:32:10.361 [2024-07-26 16:37:18.743944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.361 [2024-07-26 16:37:18.743963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.361 [2024-07-26 16:37:18.743979] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.361 [2024-07-26 16:37:18.743995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12928 len:8 PRP1 0x0 PRP2 0x0 00:32:10.361 [2024-07-26 16:37:18.744013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.361 [2024-07-26 16:37:18.744032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.361 [2024-07-26 16:37:18.744048] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.361 [2024-07-26 16:37:18.744086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12936 len:8 PRP1 0x0 PRP2 0x0 00:32:10.361 [2024-07-26 16:37:18.744108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.361 [2024-07-26 16:37:18.744129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.361 [2024-07-26 16:37:18.744146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.361 [2024-07-26 16:37:18.744163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12944 len:8 PRP1 0x0 PRP2 0x0 00:32:10.361 [2024-07-26 16:37:18.744182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.361 [2024-07-26 16:37:18.744201] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.361 [2024-07-26 16:37:18.744218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.361 [2024-07-26 16:37:18.744235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12952 len:8 PRP1 0x0 PRP2 0x0 00:32:10.361 [2024-07-26 16:37:18.744253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.361 [2024-07-26 16:37:18.744273] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.361 [2024-07-26 16:37:18.744289] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.361 [2024-07-26 16:37:18.744305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12960 len:8 PRP1 0x0 PRP2 0x0 00:32:10.361 [2024-07-26 16:37:18.744328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.361 [2024-07-26 16:37:18.744347] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.361 [2024-07-26 16:37:18.744364] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.361 [2024-07-26 16:37:18.744395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12968 len:8 PRP1 0x0 PRP2 0x0 00:32:10.361 [2024-07-26 16:37:18.744414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.361 [2024-07-26 16:37:18.744433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.361 [2024-07-26 16:37:18.744448] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.361 [2024-07-26 16:37:18.744464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12976 len:8 PRP1 0x0 PRP2 0x0 00:32:10.361 [2024-07-26 16:37:18.744482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.361 [2024-07-26 16:37:18.744499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.361 [2024-07-26 16:37:18.744514] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.361 [2024-07-26 16:37:18.744543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12984 len:8 PRP1 0x0 PRP2 0x0 00:32:10.361 [2024-07-26 16:37:18.744562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.361 [2024-07-26 16:37:18.744581] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.361 [2024-07-26 16:37:18.744597] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.361 [2024-07-26 16:37:18.744612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12992 len:8 PRP1 0x0 PRP2 0x0 00:32:10.361 [2024-07-26 16:37:18.744630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.361 [2024-07-26 16:37:18.744648] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.361 [2024-07-26 16:37:18.744664] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.361 [2024-07-26 16:37:18.744679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13000 len:8 PRP1 0x0 PRP2 0x0 00:32:10.361 [2024-07-26 16:37:18.744697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.361 [2024-07-26 16:37:18.744715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:32:10.361 [2024-07-26 16:37:18.744730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.361 [2024-07-26 16:37:18.744746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13008 len:8 PRP1 0x0 PRP2 0x0 00:32:10.361 [2024-07-26 16:37:18.744763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.361 [2024-07-26 16:37:18.744781] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.361 [2024-07-26 16:37:18.744797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.361 [2024-07-26 16:37:18.744813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13016 len:8 PRP1 0x0 PRP2 0x0 00:32:10.361 [2024-07-26 16:37:18.744831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.361 [2024-07-26 16:37:18.744849] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.361 [2024-07-26 16:37:18.744865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.361 [2024-07-26 16:37:18.744893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13024 len:8 PRP1 0x0 PRP2 0x0 00:32:10.361 [2024-07-26 16:37:18.744912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.361 [2024-07-26 16:37:18.744930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.361 [2024-07-26 16:37:18.744946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.361 [2024-07-26 16:37:18.744962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12664 len:8 PRP1 0x0 PRP2 0x0 00:32:10.361 [2024-07-26 16:37:18.744980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.361 [2024-07-26 16:37:18.744998] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.361 [2024-07-26 16:37:18.745013] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.361 [2024-07-26 16:37:18.745029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12672 len:8 PRP1 0x0 PRP2 0x0 00:32:10.361 [2024-07-26 16:37:18.745047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.361 [2024-07-26 16:37:18.745091] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.361 [2024-07-26 16:37:18.745110] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.361 [2024-07-26 16:37:18.745127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12680 len:8 PRP1 0x0 PRP2 0x0 00:32:10.361 [2024-07-26 16:37:18.745145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.361 [2024-07-26 16:37:18.745164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.361 [2024-07-26 16:37:18.745180] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.361 [2024-07-26 16:37:18.745197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12688 len:8 PRP1 0x0 PRP2 0x0 00:32:10.361 [2024-07-26 16:37:18.745215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.361 [2024-07-26 16:37:18.745233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.361 [2024-07-26 16:37:18.745250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.361 [2024-07-26 16:37:18.745266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12696 len:8 PRP1 0x0 PRP2 0x0 00:32:10.361 [2024-07-26 16:37:18.745285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.361 [2024-07-26 16:37:18.745303] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.361 [2024-07-26 16:37:18.745319] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.361 [2024-07-26 16:37:18.745335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12704 len:8 PRP1 0x0 PRP2 0x0 00:32:10.361 [2024-07-26 16:37:18.745354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.361 [2024-07-26 16:37:18.745389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.361 [2024-07-26 16:37:18.745406] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.361 [2024-07-26 16:37:18.745422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12712 len:8 PRP1 0x0 PRP2 0x0 00:32:10.361 [2024-07-26 16:37:18.745440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.361 [2024-07-26 16:37:18.745458] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.361 [2024-07-26 16:37:18.745477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.361 [2024-07-26 16:37:18.745494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12720 len:8 PRP1 0x0 PRP2 0x0 00:32:10.361 [2024-07-26 16:37:18.745512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.361 [2024-07-26 16:37:18.745530] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.361 [2024-07-26 16:37:18.745545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.361 [2024-07-26 16:37:18.745561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12728 len:8 PRP1 0x0 PRP2 0x0 00:32:10.361 [2024-07-26 16:37:18.745578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.361 [2024-07-26 16:37:18.745596] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.362 [2024-07-26 16:37:18.745612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:32:10.362 [2024-07-26 16:37:18.745628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12736 len:8 PRP1 0x0 PRP2 0x0 00:32:10.362 [2024-07-26 16:37:18.745645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.362 [2024-07-26 16:37:18.745663] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.362 [2024-07-26 16:37:18.745679] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.362 [2024-07-26 16:37:18.745695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12744 len:8 PRP1 0x0 PRP2 0x0 00:32:10.362 [2024-07-26 16:37:18.745729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.362 [2024-07-26 16:37:18.745749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.362 [2024-07-26 16:37:18.745765] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.362 [2024-07-26 16:37:18.745782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12752 len:8 PRP1 0x0 PRP2 0x0 00:32:10.362 [2024-07-26 16:37:18.745800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.362 [2024-07-26 16:37:18.745819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.362 [2024-07-26 16:37:18.745835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.362 [2024-07-26 16:37:18.745852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12760 len:8 PRP1 0x0 PRP2 0x0 00:32:10.362 [2024-07-26 16:37:18.745870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.362 [2024-07-26 16:37:18.745889] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.362 [2024-07-26 16:37:18.745905] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.362 [2024-07-26 16:37:18.745923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12768 len:8 PRP1 0x0 PRP2 0x0 00:32:10.362 [2024-07-26 16:37:18.745941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.362 [2024-07-26 16:37:18.745960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.362 [2024-07-26 16:37:18.745977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.362 [2024-07-26 16:37:18.745994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12776 len:8 PRP1 0x0 PRP2 0x0 00:32:10.362 [2024-07-26 16:37:18.746012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.362 [2024-07-26 16:37:18.746035] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.362 [2024-07-26 16:37:18.746052] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.362 [2024-07-26 16:37:18.746079] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12784 len:8 PRP1 0x0 PRP2 0x0 00:32:10.362 [2024-07-26 16:37:18.746100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.362 [2024-07-26 16:37:18.746119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.362 [2024-07-26 16:37:18.746136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.362 [2024-07-26 16:37:18.746153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12792 len:8 PRP1 0x0 PRP2 0x0 00:32:10.362 [2024-07-26 16:37:18.746172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.362 [2024-07-26 16:37:18.746191] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.362 [2024-07-26 16:37:18.746208] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.362 [2024-07-26 16:37:18.746224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12800 len:8 PRP1 0x0 PRP2 0x0 00:32:10.362 [2024-07-26 16:37:18.746243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.362 [2024-07-26 16:37:18.746261] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.362 [2024-07-26 16:37:18.746278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.362 [2024-07-26 16:37:18.746295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12808 len:8 PRP1 0x0 PRP2 0x0 00:32:10.362 [2024-07-26 16:37:18.746314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.362 [2024-07-26 16:37:18.746332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.362 [2024-07-26 16:37:18.746349] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.362 [2024-07-26 16:37:18.746365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12816 len:8 PRP1 0x0 PRP2 0x0 00:32:10.362 [2024-07-26 16:37:18.746384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.362 [2024-07-26 16:37:18.746402] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.362 [2024-07-26 16:37:18.746419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.362 [2024-07-26 16:37:18.746436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12824 len:8 PRP1 0x0 PRP2 0x0 00:32:10.362 [2024-07-26 16:37:18.746454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.362 [2024-07-26 16:37:18.746474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.362 [2024-07-26 16:37:18.746490] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.362 [2024-07-26 16:37:18.746507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:12832 len:8 PRP1 0x0 PRP2 0x0 00:32:10.362 [2024-07-26 16:37:18.746525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.362 [2024-07-26 16:37:18.746544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.362 [2024-07-26 16:37:18.746561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.362 [2024-07-26 16:37:18.746578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12840 len:8 PRP1 0x0 PRP2 0x0 00:32:10.362 [2024-07-26 16:37:18.746601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.362 [2024-07-26 16:37:18.746620] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.362 [2024-07-26 16:37:18.746637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.362 [2024-07-26 16:37:18.746654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12848 len:8 PRP1 0x0 PRP2 0x0 00:32:10.362 [2024-07-26 16:37:18.746673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.362 [2024-07-26 16:37:18.746691] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.362 [2024-07-26 16:37:18.746707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.362 [2024-07-26 16:37:18.746724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12856 len:8 PRP1 0x0 PRP2 0x0 00:32:10.362 [2024-07-26 16:37:18.746743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.362 [2024-07-26 16:37:18.746762] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.362 [2024-07-26 16:37:18.746779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.362 [2024-07-26 16:37:18.746795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12864 len:8 PRP1 0x0 PRP2 0x0 00:32:10.362 [2024-07-26 16:37:18.746813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.362 [2024-07-26 16:37:18.746832] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.362 [2024-07-26 16:37:18.746849] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.362 [2024-07-26 16:37:18.746879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12872 len:8 PRP1 0x0 PRP2 0x0 00:32:10.362 [2024-07-26 16:37:18.746898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.362 [2024-07-26 16:37:18.746919] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.362 [2024-07-26 16:37:18.746936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.362 [2024-07-26 16:37:18.746953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12880 len:8 PRP1 0x0 PRP2 0x0 
00:32:10.362 [2024-07-26 16:37:18.746972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.362 [2024-07-26 16:37:18.746991] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.363 [2024-07-26 16:37:18.747008] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.363 [2024-07-26 16:37:18.747024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13032 len:8 PRP1 0x0 PRP2 0x0 00:32:10.363 [2024-07-26 16:37:18.747043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.363 [2024-07-26 16:37:18.747069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.363 [2024-07-26 16:37:18.747088] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.363 [2024-07-26 16:37:18.747105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13040 len:8 PRP1 0x0 PRP2 0x0 00:32:10.363 [2024-07-26 16:37:18.747124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.363 [2024-07-26 16:37:18.747142] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.363 [2024-07-26 16:37:18.747159] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.363 [2024-07-26 16:37:18.747180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13048 len:8 PRP1 0x0 PRP2 0x0 00:32:10.363 [2024-07-26 16:37:18.747199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.363 [2024-07-26 16:37:18.747218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.363 [2024-07-26 16:37:18.747234] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.363 [2024-07-26 16:37:18.747251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:8 PRP1 0x0 PRP2 0x0 00:32:10.363 [2024-07-26 16:37:18.747270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.363 [2024-07-26 16:37:18.747289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.363 [2024-07-26 16:37:18.747305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.363 [2024-07-26 16:37:18.747322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13064 len:8 PRP1 0x0 PRP2 0x0 00:32:10.363 [2024-07-26 16:37:18.747341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.363 [2024-07-26 16:37:18.747359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.363 [2024-07-26 16:37:18.747375] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.363 [2024-07-26 16:37:18.747392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13072 len:8 PRP1 0x0 PRP2 0x0 00:32:10.363 [2024-07-26 16:37:18.747410] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.363 [2024-07-26 16:37:18.747429] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.363 [2024-07-26 16:37:18.747446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.363 [2024-07-26 16:37:18.747463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13080 len:8 PRP1 0x0 PRP2 0x0 00:32:10.363 [2024-07-26 16:37:18.747481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.363 [2024-07-26 16:37:18.747499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.363 [2024-07-26 16:37:18.747515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.363 [2024-07-26 16:37:18.747533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:8 PRP1 0x0 PRP2 0x0 00:32:10.363 [2024-07-26 16:37:18.747552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.363 [2024-07-26 16:37:18.747571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.363 [2024-07-26 16:37:18.747587] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.363 [2024-07-26 16:37:18.747604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13096 len:8 PRP1 0x0 PRP2 0x0 00:32:10.363 [2024-07-26 16:37:18.747623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.363 [2024-07-26 16:37:18.747641] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.363 [2024-07-26 16:37:18.747657] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.363 [2024-07-26 16:37:18.747674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13104 len:8 PRP1 0x0 PRP2 0x0 00:32:10.363 [2024-07-26 16:37:18.747693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.363 [2024-07-26 16:37:18.747716] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.363 [2024-07-26 16:37:18.747734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.363 [2024-07-26 16:37:18.747750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13112 len:8 PRP1 0x0 PRP2 0x0 00:32:10.363 [2024-07-26 16:37:18.747769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.363 [2024-07-26 16:37:18.747788] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.363 [2024-07-26 16:37:18.747805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.363 [2024-07-26 16:37:18.747821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:8 PRP1 0x0 PRP2 0x0 00:32:10.363 [2024-07-26 16:37:18.747840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.363 [2024-07-26 16:37:18.747858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.363 [2024-07-26 16:37:18.747874] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.363 [2024-07-26 16:37:18.747891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13128 len:8 PRP1 0x0 PRP2 0x0 00:32:10.363 [2024-07-26 16:37:18.747910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.363 [2024-07-26 16:37:18.747928] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.363 [2024-07-26 16:37:18.747945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.363 [2024-07-26 16:37:18.747961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13136 len:8 PRP1 0x0 PRP2 0x0 00:32:10.363 [2024-07-26 16:37:18.747979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.363 [2024-07-26 16:37:18.747998] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.363 [2024-07-26 16:37:18.748014] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.363 [2024-07-26 16:37:18.748031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13144 len:8 PRP1 0x0 PRP2 0x0 00:32:10.363 [2024-07-26 16:37:18.748049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.363 [2024-07-26 16:37:18.748074] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.363 [2024-07-26 16:37:18.748092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.363 [2024-07-26 16:37:18.748109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:8 PRP1 0x0 PRP2 0x0 00:32:10.363 [2024-07-26 16:37:18.748128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.363 [2024-07-26 16:37:18.748147] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.363 [2024-07-26 16:37:18.748163] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.363 [2024-07-26 16:37:18.748180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13160 len:8 PRP1 0x0 PRP2 0x0 00:32:10.363 [2024-07-26 16:37:18.748198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.363 [2024-07-26 16:37:18.748216] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.363 [2024-07-26 16:37:18.748232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.363 [2024-07-26 16:37:18.748249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13168 len:8 PRP1 0x0 PRP2 0x0 00:32:10.363 [2024-07-26 16:37:18.748271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:32:10.363 [2024-07-26 16:37:18.748292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.363 [2024-07-26 16:37:18.748309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.363 [2024-07-26 16:37:18.748326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13176 len:8 PRP1 0x0 PRP2 0x0 00:32:10.363 [2024-07-26 16:37:18.748344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.363 [2024-07-26 16:37:18.748363] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.363 [2024-07-26 16:37:18.748380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.363 [2024-07-26 16:37:18.748397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:8 PRP1 0x0 PRP2 0x0 00:32:10.363 [2024-07-26 16:37:18.748415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.363 [2024-07-26 16:37:18.748434] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.363 [2024-07-26 16:37:18.748450] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.363 [2024-07-26 16:37:18.748467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13192 len:8 PRP1 0x0 PRP2 0x0 00:32:10.363 [2024-07-26 16:37:18.748486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.363 [2024-07-26 16:37:18.748505] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.363 [2024-07-26 16:37:18.748521] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.363 [2024-07-26 16:37:18.748537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13200 len:8 PRP1 0x0 PRP2 0x0 00:32:10.363 [2024-07-26 16:37:18.748556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.363 [2024-07-26 16:37:18.748575] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.363 [2024-07-26 16:37:18.748591] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.363 [2024-07-26 16:37:18.748608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13208 len:8 PRP1 0x0 PRP2 0x0 00:32:10.363 [2024-07-26 16:37:18.748626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.363 [2024-07-26 16:37:18.748645] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.363 [2024-07-26 16:37:18.748662] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.364 [2024-07-26 16:37:18.748678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:8 PRP1 0x0 PRP2 0x0 00:32:10.364 [2024-07-26 16:37:18.748697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.364 [2024-07-26 16:37:18.748715] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.364 [2024-07-26 16:37:18.748732] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.364 [2024-07-26 16:37:18.748748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13224 len:8 PRP1 0x0 PRP2 0x0 00:32:10.364 [2024-07-26 16:37:18.748767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.364 [2024-07-26 16:37:18.748786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.364 [2024-07-26 16:37:18.748802] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.364 [2024-07-26 16:37:18.748829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13232 len:8 PRP1 0x0 PRP2 0x0 00:32:10.364 [2024-07-26 16:37:18.748849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.364 [2024-07-26 16:37:18.748869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.364 [2024-07-26 16:37:18.748885] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.364 [2024-07-26 16:37:18.748901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13240 len:8 PRP1 0x0 PRP2 0x0 00:32:10.364 [2024-07-26 16:37:18.748920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.364 [2024-07-26 16:37:18.748939] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.364 [2024-07-26 16:37:18.748955] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.364 [2024-07-26 16:37:18.748972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:8 PRP1 0x0 PRP2 0x0 00:32:10.364 [2024-07-26 16:37:18.748993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.364 [2024-07-26 16:37:18.749012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.364 [2024-07-26 16:37:18.749029] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.364 [2024-07-26 16:37:18.749046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13256 len:8 PRP1 0x0 PRP2 0x0 00:32:10.364 [2024-07-26 16:37:18.749072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.364 [2024-07-26 16:37:18.749092] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.364 [2024-07-26 16:37:18.749110] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.364 [2024-07-26 16:37:18.749127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13264 len:8 PRP1 0x0 PRP2 0x0 00:32:10.364 [2024-07-26 16:37:18.749146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.364 [2024-07-26 16:37:18.749165] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:32:10.364 [2024-07-26 16:37:18.749183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.364 [2024-07-26 16:37:18.749245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13272 len:8 PRP1 0x0 PRP2 0x0 00:32:10.364 [2024-07-26 16:37:18.749266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.364 [2024-07-26 16:37:18.749288] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.364 [2024-07-26 16:37:18.749305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.364 [2024-07-26 16:37:18.749322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:8 PRP1 0x0 PRP2 0x0 00:32:10.364 [2024-07-26 16:37:18.749341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.364 [2024-07-26 16:37:18.749360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.364 [2024-07-26 16:37:18.749377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.364 [2024-07-26 16:37:18.749394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13288 len:8 PRP1 0x0 PRP2 0x0 00:32:10.364 [2024-07-26 16:37:18.749413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.364 [2024-07-26 16:37:18.749432] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.364 [2024-07-26 16:37:18.749453] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.364 [2024-07-26 16:37:18.749471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13296 len:8 PRP1 0x0 PRP2 0x0 00:32:10.364 [2024-07-26 16:37:18.749490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.364 [2024-07-26 16:37:18.749510] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.364 [2024-07-26 16:37:18.749526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.364 [2024-07-26 16:37:18.749544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13304 len:8 PRP1 0x0 PRP2 0x0 00:32:10.364 [2024-07-26 16:37:18.749563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.364 [2024-07-26 16:37:18.749582] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.364 [2024-07-26 16:37:18.749599] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.364 [2024-07-26 16:37:18.749616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:8 PRP1 0x0 PRP2 0x0 00:32:10.364 [2024-07-26 16:37:18.749635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.364 [2024-07-26 16:37:18.749655] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.364 [2024-07-26 
16:37:18.749672] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.364 [2024-07-26 16:37:18.749689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13320 len:8 PRP1 0x0 PRP2 0x0 00:32:10.364 [2024-07-26 16:37:18.749708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.364 [2024-07-26 16:37:18.749728] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.364 [2024-07-26 16:37:18.749745] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.364 [2024-07-26 16:37:18.749762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13328 len:8 PRP1 0x0 PRP2 0x0 00:32:10.364 [2024-07-26 16:37:18.749781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.364 [2024-07-26 16:37:18.749800] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.364 [2024-07-26 16:37:18.749816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.364 [2024-07-26 16:37:18.749834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13336 len:8 PRP1 0x0 PRP2 0x0 00:32:10.364 [2024-07-26 16:37:18.749853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.364 [2024-07-26 16:37:18.749872] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.364 [2024-07-26 16:37:18.749889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.364 [2024-07-26 16:37:18.749906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:8 PRP1 0x0 PRP2 0x0 00:32:10.364 [2024-07-26 16:37:18.749925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.364 [2024-07-26 16:37:18.749944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.364 [2024-07-26 16:37:18.749961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.364 [2024-07-26 16:37:18.749977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13352 len:8 PRP1 0x0 PRP2 0x0 00:32:10.364 [2024-07-26 16:37:18.749997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.364 [2024-07-26 16:37:18.750019] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.364 [2024-07-26 16:37:18.750037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.364 [2024-07-26 16:37:18.750054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13360 len:8 PRP1 0x0 PRP2 0x0 00:32:10.364 [2024-07-26 16:37:18.750082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.364 [2024-07-26 16:37:18.750102] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.364 [2024-07-26 16:37:18.750120] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.364 [2024-07-26 16:37:18.750137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13368 len:8 PRP1 0x0 PRP2 0x0 00:32:10.364 [2024-07-26 16:37:18.750156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.364 [2024-07-26 16:37:18.750175] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.364 [2024-07-26 16:37:18.750192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.364 [2024-07-26 16:37:18.750209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:8 PRP1 0x0 PRP2 0x0 00:32:10.364 [2024-07-26 16:37:18.750228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.364 [2024-07-26 16:37:18.750248] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.364 [2024-07-26 16:37:18.750264] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.364 [2024-07-26 16:37:18.750281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13384 len:8 PRP1 0x0 PRP2 0x0 00:32:10.364 [2024-07-26 16:37:18.750300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.364 [2024-07-26 16:37:18.750319] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.364 [2024-07-26 16:37:18.750336] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.364 [2024-07-26 16:37:18.750353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13392 len:8 PRP1 0x0 PRP2 0x0 00:32:10.364 [2024-07-26 16:37:18.750372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.364 [2024-07-26 16:37:18.750390] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.364 [2024-07-26 16:37:18.750407] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.364 [2024-07-26 16:37:18.750424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13400 len:8 PRP1 0x0 PRP2 0x0 00:32:10.365 [2024-07-26 16:37:18.750443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.365 [2024-07-26 16:37:18.750461] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.365 [2024-07-26 16:37:18.750477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.365 [2024-07-26 16:37:18.750494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:8 PRP1 0x0 PRP2 0x0 00:32:10.365 [2024-07-26 16:37:18.750513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.365 [2024-07-26 16:37:18.750532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.365 [2024-07-26 16:37:18.750548] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:32:10.365 [2024-07-26 16:37:18.750564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13416 len:8 PRP1 0x0 PRP2 0x0 00:32:10.365 [2024-07-26 16:37:18.750586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.365 [2024-07-26 16:37:18.750606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.365 [2024-07-26 16:37:18.750623] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.365 [2024-07-26 16:37:18.750640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13424 len:8 PRP1 0x0 PRP2 0x0 00:32:10.365 [2024-07-26 16:37:18.750658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.365 [2024-07-26 16:37:18.750683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.365 [2024-07-26 16:37:18.750700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.365 [2024-07-26 16:37:18.750717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13432 len:8 PRP1 0x0 PRP2 0x0 00:32:10.365 [2024-07-26 16:37:18.750735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.365 [2024-07-26 16:37:18.750754] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.365 [2024-07-26 16:37:18.750770] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.365 [2024-07-26 16:37:18.750787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:8 PRP1 0x0 PRP2 0x0 00:32:10.365 [2024-07-26 16:37:18.750805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.365 [2024-07-26 16:37:18.750823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.365 [2024-07-26 16:37:18.750839] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.365 [2024-07-26 16:37:18.750856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13448 len:8 PRP1 0x0 PRP2 0x0 00:32:10.365 [2024-07-26 16:37:18.750874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.365 [2024-07-26 16:37:18.750893] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.365 [2024-07-26 16:37:18.750909] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.365 [2024-07-26 16:37:18.750925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13456 len:8 PRP1 0x0 PRP2 0x0 00:32:10.365 [2024-07-26 16:37:18.750944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.365 [2024-07-26 16:37:18.750962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.365 [2024-07-26 16:37:18.750978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.365 [2024-07-26 
16:37:18.750995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13464 len:8 PRP1 0x0 PRP2 0x0 00:32:10.365 [2024-07-26 16:37:18.751013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.365 [2024-07-26 16:37:18.751031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.365 [2024-07-26 16:37:18.751047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.365 [2024-07-26 16:37:18.751070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12888 len:8 PRP1 0x0 PRP2 0x0 00:32:10.365 [2024-07-26 16:37:18.751091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.365 [2024-07-26 16:37:18.751111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.365 [2024-07-26 16:37:18.751127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.365 [2024-07-26 16:37:18.751147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12896 len:8 PRP1 0x0 PRP2 0x0 00:32:10.365 [2024-07-26 16:37:18.751167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.365 [2024-07-26 16:37:18.751186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.365 [2024-07-26 16:37:18.751202] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.365 [2024-07-26 16:37:18.751218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:8 PRP1 0x0 PRP2 0x0 00:32:10.365 [2024-07-26 16:37:18.751236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.365 [2024-07-26 16:37:18.751260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.365 [2024-07-26 16:37:18.751278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.365 [2024-07-26 16:37:18.751295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13480 len:8 PRP1 0x0 PRP2 0x0 00:32:10.365 [2024-07-26 16:37:18.751313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.365 [2024-07-26 16:37:18.751331] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.365 [2024-07-26 16:37:18.751347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.365 [2024-07-26 16:37:18.751364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13488 len:8 PRP1 0x0 PRP2 0x0 00:32:10.365 [2024-07-26 16:37:18.751382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.365 [2024-07-26 16:37:18.751401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.365 [2024-07-26 16:37:18.751417] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.365 [2024-07-26 16:37:18.751434] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13496 len:8 PRP1 0x0 PRP2 0x0 00:32:10.365 [2024-07-26 16:37:18.751452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.365 [2024-07-26 16:37:18.751471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.365 [2024-07-26 16:37:18.751487] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.365 [2024-07-26 16:37:18.751503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:8 PRP1 0x0 PRP2 0x0 00:32:10.365 [2024-07-26 16:37:18.751521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.365 [2024-07-26 16:37:18.751539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.365 [2024-07-26 16:37:18.751555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.365 [2024-07-26 16:37:18.751584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13512 len:8 PRP1 0x0 PRP2 0x0 00:32:10.365 [2024-07-26 16:37:18.751604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.365 [2024-07-26 16:37:18.751624] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.365 [2024-07-26 16:37:18.751640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.365 [2024-07-26 16:37:18.751657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13520 len:8 PRP1 0x0 PRP2 0x0 00:32:10.365 [2024-07-26 16:37:18.751675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.365 [2024-07-26 16:37:18.751697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.365 [2024-07-26 16:37:18.751714] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.365 [2024-07-26 16:37:18.751731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13528 len:8 PRP1 0x0 PRP2 0x0 00:32:10.365 [2024-07-26 16:37:18.751750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.365 [2024-07-26 16:37:18.751769] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.365 [2024-07-26 16:37:18.751785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.365 [2024-07-26 16:37:18.751802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:8 PRP1 0x0 PRP2 0x0 00:32:10.365 [2024-07-26 16:37:18.751820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.365 [2024-07-26 16:37:18.752112] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f3180 was disconnected and freed. reset controller. 
00:32:10.365 [2024-07-26 16:37:18.752143] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:32:10.365 [2024-07-26 16:37:18.752166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.365 [2024-07-26 16:37:18.752241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:32:10.365 [2024-07-26 16:37:18.756124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.365 [2024-07-26 16:37:18.802634] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:10.365 [2024-07-26 16:37:23.275826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:10.365 [2024-07-26 16:37:23.275902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.365 [2024-07-26 16:37:23.275940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:10.365 [2024-07-26 16:37:23.275963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.365 [2024-07-26 16:37:23.275985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:10.365 [2024-07-26 16:37:23.276006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.365 [2024-07-26 16:37:23.276028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:10.365 [2024-07-26 16:37:23.276050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.366 [2024-07-26 16:37:23.276084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:32:10.366 [2024-07-26 16:37:23.278034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:111056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.366 [2024-07-26 16:37:23.278105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.366 [2024-07-26 16:37:23.278147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:111064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.366 [2024-07-26 16:37:23.278170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.366 [2024-07-26 16:37:23.278213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:111072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.366 [2024-07-26 16:37:23.278241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.366 [2024-07-26 16:37:23.278267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:111080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.366 [2024-07-26 
16:37:23.278289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.366 [2024-07-26 16:37:23.278313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:111088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.366 [2024-07-26 16:37:23.278335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.366 [2024-07-26 16:37:23.278359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:111096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.366 [2024-07-26 16:37:23.278396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.366 [2024-07-26 16:37:23.278419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:111104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.366 [2024-07-26 16:37:23.278440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.366 [2024-07-26 16:37:23.278464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:111112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.366 [2024-07-26 16:37:23.278485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.366 [2024-07-26 16:37:23.278522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:111120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.366 [2024-07-26 16:37:23.278544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.366 [2024-07-26 16:37:23.278566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:111128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.366 [2024-07-26 16:37:23.278587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.366 [2024-07-26 16:37:23.278609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.366 [2024-07-26 16:37:23.278630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.366 [2024-07-26 16:37:23.278651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:111144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.366 [2024-07-26 16:37:23.278672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.366 [2024-07-26 16:37:23.278694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:111152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.366 [2024-07-26 16:37:23.278715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.366 [2024-07-26 16:37:23.278737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:111160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.366 [2024-07-26 16:37:23.278758] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.366 [2024-07-26 16:37:23.278780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:111168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.366 [2024-07-26 16:37:23.278801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.366 [2024-07-26 16:37:23.278827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:111176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.366 [2024-07-26 16:37:23.278848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.366 [2024-07-26 16:37:23.278870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:111184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.366 [2024-07-26 16:37:23.278891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.366 [2024-07-26 16:37:23.278913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:111192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.366 [2024-07-26 16:37:23.278945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.366 [2024-07-26 16:37:23.278967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:111200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.366 [2024-07-26 16:37:23.279002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.366 [2024-07-26 16:37:23.279025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:111208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.366 [2024-07-26 16:37:23.279045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.366 [2024-07-26 16:37:23.279090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:111216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.366 [2024-07-26 16:37:23.279115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.366 [2024-07-26 16:37:23.279138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:111224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.366 [2024-07-26 16:37:23.279158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.366 [2024-07-26 16:37:23.279181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:111232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.366 [2024-07-26 16:37:23.279201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.366 [2024-07-26 16:37:23.279224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:111240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.366 [2024-07-26 16:37:23.279254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.366 [2024-07-26 16:37:23.279277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:111248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.366 [2024-07-26 16:37:23.279297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.366 [2024-07-26 16:37:23.279319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:111256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.366 [2024-07-26 16:37:23.279339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.366 [2024-07-26 16:37:23.279362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:111264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.366 [2024-07-26 16:37:23.279397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.366 [2024-07-26 16:37:23.279427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:111272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.366 [2024-07-26 16:37:23.279446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.366 [2024-07-26 16:37:23.279473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:111280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.366 [2024-07-26 16:37:23.279494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.366 [2024-07-26 16:37:23.279516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:111288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.366 [2024-07-26 16:37:23.279535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.366 [2024-07-26 16:37:23.279557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:111296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.366 [2024-07-26 16:37:23.279577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.366 [2024-07-26 16:37:23.279600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:111304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.366 [2024-07-26 16:37:23.279619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.366 [2024-07-26 16:37:23.279641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:111312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.366 [2024-07-26 16:37:23.279660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.366 [2024-07-26 16:37:23.279681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:111320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.367 [2024-07-26 16:37:23.279701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.367 [2024-07-26 16:37:23.279723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:111328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.367 [2024-07-26 16:37:23.279748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.367 [2024-07-26 16:37:23.279769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:111336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.367 [2024-07-26 16:37:23.279788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.367 [2024-07-26 16:37:23.279809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:111344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.367 [2024-07-26 16:37:23.279829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.367 [2024-07-26 16:37:23.279851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:111352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.367 [2024-07-26 16:37:23.279870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.367 [2024-07-26 16:37:23.279891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:111360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.367 [2024-07-26 16:37:23.279910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.367 [2024-07-26 16:37:23.279932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:111368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.367 [2024-07-26 16:37:23.279952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.367 [2024-07-26 16:37:23.279973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:111376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.367 [2024-07-26 16:37:23.279996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.367 [2024-07-26 16:37:23.280018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:111384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.367 [2024-07-26 16:37:23.280038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.367 [2024-07-26 16:37:23.280082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:111392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.367 [2024-07-26 16:37:23.280106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.367 [2024-07-26 16:37:23.280129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:111400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.367 [2024-07-26 16:37:23.280149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.367 
[2024-07-26 16:37:23.280171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:111408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.367 [2024-07-26 16:37:23.280191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.367 [2024-07-26 16:37:23.280213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:111416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.367 [2024-07-26 16:37:23.280233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.367 [2024-07-26 16:37:23.280254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:111424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.367 [2024-07-26 16:37:23.280275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.367 [2024-07-26 16:37:23.280297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:111432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.367 [2024-07-26 16:37:23.280317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.367 [2024-07-26 16:37:23.280339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:111440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.367 [2024-07-26 16:37:23.280358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.367 [2024-07-26 16:37:23.280395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:111448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.367 [2024-07-26 16:37:23.280430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.367 [2024-07-26 16:37:23.280454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:111456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.367 [2024-07-26 16:37:23.280474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.367 [2024-07-26 16:37:23.280496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:111464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.367 [2024-07-26 16:37:23.280515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.367 [2024-07-26 16:37:23.280536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:111472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.367 [2024-07-26 16:37:23.280555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.367 [2024-07-26 16:37:23.280584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:111480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.367 [2024-07-26 16:37:23.280604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.367 [2024-07-26 16:37:23.280625] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.367 [2024-07-26 16:37:23.280645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.367 [2024-07-26 16:37:23.280667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:110920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.367 [2024-07-26 16:37:23.280687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.367 [2024-07-26 16:37:23.280708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:111496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.367 [2024-07-26 16:37:23.280727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.367 [2024-07-26 16:37:23.280749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:111504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.367 [2024-07-26 16:37:23.280769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.367 [2024-07-26 16:37:23.280790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:111512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.367 [2024-07-26 16:37:23.280810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.367 [2024-07-26 16:37:23.280831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:111520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.367 [2024-07-26 16:37:23.280850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.367 [2024-07-26 16:37:23.280872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:111528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.367 [2024-07-26 16:37:23.280891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.367 [2024-07-26 16:37:23.280913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:111536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.367 [2024-07-26 16:37:23.280933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.367 [2024-07-26 16:37:23.280954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:111544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.367 [2024-07-26 16:37:23.280974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.367 [2024-07-26 16:37:23.280995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:111552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.367 [2024-07-26 16:37:23.281015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.367 [2024-07-26 16:37:23.281037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:8 nsid:1 lba:111560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.367 [2024-07-26 16:37:23.281056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.367 [2024-07-26 16:37:23.281102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:111568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.367 [2024-07-26 16:37:23.281127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.367 [2024-07-26 16:37:23.281150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:111576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.367 [2024-07-26 16:37:23.281170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.367 [2024-07-26 16:37:23.281192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:111584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.367 [2024-07-26 16:37:23.281213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.367 [2024-07-26 16:37:23.281238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:111592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.367 [2024-07-26 16:37:23.281259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.367 [2024-07-26 16:37:23.281281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.367 [2024-07-26 16:37:23.281301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.367 [2024-07-26 16:37:23.281324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.367 [2024-07-26 16:37:23.281344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.367 [2024-07-26 16:37:23.281382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:111616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.367 [2024-07-26 16:37:23.281402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.367 [2024-07-26 16:37:23.281424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:111624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.367 [2024-07-26 16:37:23.281461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.367 [2024-07-26 16:37:23.281483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:111632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.368 [2024-07-26 16:37:23.281504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.368 [2024-07-26 16:37:23.281527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:111640 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.368 [2024-07-26 16:37:23.281547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.368 [2024-07-26 16:37:23.281569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:111648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.368 [2024-07-26 16:37:23.281589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.368 [2024-07-26 16:37:23.281612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:111656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.368 [2024-07-26 16:37:23.281632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.368 [2024-07-26 16:37:23.281655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:111664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.368 [2024-07-26 16:37:23.281675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.368 [2024-07-26 16:37:23.281702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:111672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.368 [2024-07-26 16:37:23.281723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.368 [2024-07-26 16:37:23.281746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:111680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.368 [2024-07-26 16:37:23.281766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.368 [2024-07-26 16:37:23.281788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:111688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.368 [2024-07-26 16:37:23.281809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.368 [2024-07-26 16:37:23.281831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:111696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.368 [2024-07-26 16:37:23.281852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.368 [2024-07-26 16:37:23.281874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:111704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.368 [2024-07-26 16:37:23.281895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.368 [2024-07-26 16:37:23.281917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:111712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.368 [2024-07-26 16:37:23.281937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.368 [2024-07-26 16:37:23.281960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:110928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:10.368 [2024-07-26 16:37:23.281980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.368 [2024-07-26 16:37:23.282002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:110936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.368 [2024-07-26 16:37:23.282022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.368 [2024-07-26 16:37:23.282044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:110944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.368 [2024-07-26 16:37:23.282088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.368 [2024-07-26 16:37:23.282115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:110952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.368 [2024-07-26 16:37:23.282136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.368 [2024-07-26 16:37:23.282159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:110960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.368 [2024-07-26 16:37:23.282180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.368 [2024-07-26 16:37:23.282204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:110968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.368 [2024-07-26 16:37:23.282225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.368 [2024-07-26 16:37:23.282248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:110976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.368 [2024-07-26 16:37:23.282272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.368 [2024-07-26 16:37:23.282297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:110984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.368 [2024-07-26 16:37:23.282319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.368 [2024-07-26 16:37:23.282342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:110992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.368 [2024-07-26 16:37:23.282364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.368 [2024-07-26 16:37:23.282402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:111000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.368 [2024-07-26 16:37:23.282423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.368 [2024-07-26 16:37:23.282445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:111008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.368 [2024-07-26 
16:37:23.282466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.368 [2024-07-26 16:37:23.282489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:111016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.368 [2024-07-26 16:37:23.282509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.368 [2024-07-26 16:37:23.282531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:111024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.368 [2024-07-26 16:37:23.282552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.368 [2024-07-26 16:37:23.282574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:111032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.368 [2024-07-26 16:37:23.282595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.368 [2024-07-26 16:37:23.282618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:111040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.368 [2024-07-26 16:37:23.282638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.368 [2024-07-26 16:37:23.282661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:111048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.368 [2024-07-26 16:37:23.282681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.368 [2024-07-26 16:37:23.282704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:111720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.368 [2024-07-26 16:37:23.282724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.368 [2024-07-26 16:37:23.282746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.368 [2024-07-26 16:37:23.282767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.368 [2024-07-26 16:37:23.282791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:111736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.368 [2024-07-26 16:37:23.282811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.368 [2024-07-26 16:37:23.282834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:111744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.368 [2024-07-26 16:37:23.282858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.368 [2024-07-26 16:37:23.282881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:111752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.368 [2024-07-26 16:37:23.282902] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.368 [2024-07-26 16:37:23.282924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:111760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.368 [2024-07-26 16:37:23.282944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.368 [2024-07-26 16:37:23.282966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:111768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.368 [2024-07-26 16:37:23.282986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.368 [2024-07-26 16:37:23.283009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:111776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.368 [2024-07-26 16:37:23.283030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.368 [2024-07-26 16:37:23.283077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:111784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.368 [2024-07-26 16:37:23.283100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.368 [2024-07-26 16:37:23.283123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:111792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.368 [2024-07-26 16:37:23.283144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.368 [2024-07-26 16:37:23.283167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:111800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.368 [2024-07-26 16:37:23.283195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.368 [2024-07-26 16:37:23.283220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:111808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.368 [2024-07-26 16:37:23.283241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.368 [2024-07-26 16:37:23.283265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:111816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.368 [2024-07-26 16:37:23.283286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.369 [2024-07-26 16:37:23.283310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:111824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.369 [2024-07-26 16:37:23.283343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.369 [2024-07-26 16:37:23.283384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:111832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.369 [2024-07-26 16:37:23.283406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.369 [2024-07-26 16:37:23.283428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.369 [2024-07-26 16:37:23.283449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.369 [2024-07-26 16:37:23.283475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:111848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.369 [2024-07-26 16:37:23.283496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.369 [2024-07-26 16:37:23.283520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:111856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.369 [2024-07-26 16:37:23.283540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.369 [2024-07-26 16:37:23.283562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:111864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.369 [2024-07-26 16:37:23.283583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.369 [2024-07-26 16:37:23.283605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:111872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.369 [2024-07-26 16:37:23.283626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.369 [2024-07-26 16:37:23.283649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:111880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.369 [2024-07-26 16:37:23.283669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.369 [2024-07-26 16:37:23.283691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:111888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.369 [2024-07-26 16:37:23.283712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.369 [2024-07-26 16:37:23.283735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:111896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.369 [2024-07-26 16:37:23.283755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.369 [2024-07-26 16:37:23.283795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.369 [2024-07-26 16:37:23.283820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111904 len:8 PRP1 0x0 PRP2 0x0 00:32:10.369 [2024-07-26 16:37:23.283840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.369 [2024-07-26 16:37:23.283868] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.369 [2024-07-26 16:37:23.283886] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.369 [2024-07-26 16:37:23.283904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111912 len:8 PRP1 0x0 PRP2 0x0 00:32:10.369 [2024-07-26 16:37:23.283923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.369 [2024-07-26 16:37:23.283948] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.369 [2024-07-26 16:37:23.283965] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.369 [2024-07-26 16:37:23.283981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111920 len:8 PRP1 0x0 PRP2 0x0 00:32:10.369 [2024-07-26 16:37:23.283999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.369 [2024-07-26 16:37:23.284018] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.369 [2024-07-26 16:37:23.284034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.369 [2024-07-26 16:37:23.284074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111928 len:8 PRP1 0x0 PRP2 0x0 00:32:10.369 [2024-07-26 16:37:23.284098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.369 [2024-07-26 16:37:23.284119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.369 [2024-07-26 16:37:23.284136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.369 [2024-07-26 16:37:23.284153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111936 len:8 PRP1 0x0 PRP2 0x0 00:32:10.369 [2024-07-26 16:37:23.284172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.369 [2024-07-26 16:37:23.284445] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f3900 was disconnected and freed. reset controller. 00:32:10.369 [2024-07-26 16:37:23.284475] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:32:10.369 [2024-07-26 16:37:23.284498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.369 [2024-07-26 16:37:23.288494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.369 [2024-07-26 16:37:23.288552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:32:10.369 [2024-07-26 16:37:23.340579] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
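The abort/reset burst above is the expected result of dropping the active path mid-I/O: queued commands complete with ABORTED - SQ DELETION, bdev_nvme fails over from 10.0.0.2:4422 back to 10.0.0.2:4420, and the controller reset succeeds. For reference, a condensed sketch of the sequence the failover test drives below over the bdevperf RPC socket; the paths, ports, and NQN are taken from this run, the backgrounding and ordering are simplified, and exact option spellings may differ across SPDK versions.

  # Hedged sketch of the failover exercise traced below (not the verbatim script).
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC=$SPDK/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock

  # Start bdevperf in wait-for-RPC mode (-z) so paths can be attached before I/O begins.
  $SPDK/build/examples/bdevperf -z -r $SOCK -q 128 -o 4096 -w verify -t 1 -f &

  # Expose the additional target ports the test fails over between.
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

  # Attach the same controller over all three ports, then drop the active path mid-run.
  $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $RPC -s $SOCK bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # Run the verify workload; a successful pass logs "Resetting controller successful".
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests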
00:32:10.369 00:32:10.369 Latency(us) 00:32:10.369 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:10.369 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:10.369 Verification LBA range: start 0x0 length 0x4000 00:32:10.369 NVMe0n1 : 15.00 6007.66 23.47 550.56 0.00 19482.75 1013.38 31263.10 00:32:10.369 =================================================================================================================== 00:32:10.369 Total : 6007.66 23.47 550.56 0.00 19482.75 1013.38 31263.10 00:32:10.369 Received shutdown signal, test time was about 15.000000 seconds 00:32:10.369 00:32:10.369 Latency(us) 00:32:10.369 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:10.369 =================================================================================================================== 00:32:10.369 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:10.369 16:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:32:10.369 16:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:32:10.369 16:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:32:10.369 16:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=778667 00:32:10.369 16:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:32:10.369 16:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 778667 /var/tmp/bdevperf.sock 00:32:10.369 16:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 778667 ']' 00:32:10.369 16:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:10.369 16:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:10.369 16:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:10.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:32:10.369 16:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:10.369 16:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:11.304 16:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:11.304 16:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:32:11.304 16:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:11.562 [2024-07-26 16:37:31.168313] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:11.562 16:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:11.819 [2024-07-26 16:37:31.433143] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:11.819 16:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:12.384 NVMe0n1 00:32:12.384 16:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:12.642 00:32:12.642 16:37:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:13.207 00:32:13.207 16:37:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:13.207 16:37:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:32:13.207 16:37:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:13.465 16:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:32:16.739 16:37:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:16.739 16:37:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:32:16.739 16:37:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=779465 00:32:16.739 16:37:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:16.739 16:37:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 779465 00:32:18.119 0 00:32:18.119 16:37:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:18.119 [2024-07-26 16:37:29.937946] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:32:18.119 [2024-07-26 16:37:29.938129] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid778667 ] 00:32:18.119 EAL: No free 2048 kB hugepages reported on node 1 00:32:18.119 [2024-07-26 16:37:30.073805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.119 [2024-07-26 16:37:30.313967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:18.119 [2024-07-26 16:37:33.155340] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:18.119 [2024-07-26 16:37:33.155495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:18.119 [2024-07-26 16:37:33.155534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.119 [2024-07-26 16:37:33.155564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:18.119 [2024-07-26 16:37:33.155585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.119 [2024-07-26 16:37:33.155607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:18.119 [2024-07-26 16:37:33.155628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.119 [2024-07-26 16:37:33.155650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:18.119 [2024-07-26 16:37:33.155671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.119 [2024-07-26 16:37:33.155692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:18.119 [2024-07-26 16:37:33.155801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:18.119 [2024-07-26 16:37:33.155852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:32:18.119 [2024-07-26 16:37:33.204946] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:18.119 Running I/O for 1 seconds... 
00:32:18.119 00:32:18.119 Latency(us) 00:32:18.119 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:18.119 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:18.119 Verification LBA range: start 0x0 length 0x4000 00:32:18.120 NVMe0n1 : 1.02 6064.58 23.69 0.00 0.00 20975.96 3568.07 19223.89 00:32:18.120 =================================================================================================================== 00:32:18.120 Total : 6064.58 23.69 0.00 0.00 20975.96 3568.07 19223.89 00:32:18.120 16:37:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:18.120 16:37:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:32:18.120 16:37:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:18.377 16:37:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:18.377 16:37:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:32:18.635 16:37:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:18.892 16:37:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:32:22.169 16:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:22.169 16:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:32:22.169 16:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 778667 00:32:22.169 16:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 778667 ']' 00:32:22.169 16:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 778667 00:32:22.169 16:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:32:22.169 16:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:22.169 16:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 778667 00:32:22.169 16:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:22.169 16:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:22.169 16:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 778667' 00:32:22.169 killing process with pid 778667 00:32:22.169 16:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 778667 00:32:22.169 16:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 778667 00:32:23.102 16:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:32:23.102 16:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:23.685 16:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:32:23.685 16:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:23.685 16:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:32:23.685 16:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:23.685 16:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:32:23.685 16:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:23.685 16:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:32:23.685 16:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:23.685 16:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:23.685 rmmod nvme_tcp 00:32:23.685 rmmod nvme_fabrics 00:32:23.685 rmmod nvme_keyring 00:32:23.685 16:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:23.685 16:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:32:23.685 16:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:32:23.685 16:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 776242 ']' 00:32:23.685 16:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 776242 00:32:23.685 16:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 776242 ']' 00:32:23.685 16:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 776242 00:32:23.685 16:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:32:23.685 16:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:23.685 16:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 776242 00:32:23.685 16:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:23.685 16:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:23.685 16:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 776242' 00:32:23.685 killing process with pid 776242 00:32:23.685 16:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 776242 00:32:23.685 16:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 776242 00:32:25.063 16:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:25.063 16:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:25.063 16:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:25.063 16:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:25.063 16:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:25.063 16:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:25.063 16:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:25.063 16:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:26.965 16:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:26.965 00:32:26.965 real 0m39.779s 00:32:26.965 user 2m19.042s 00:32:26.965 sys 0m6.195s 00:32:26.965 16:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:26.965 16:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:26.965 ************************************ 00:32:26.965 END TEST nvmf_failover 00:32:26.965 ************************************ 00:32:26.965 16:37:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:26.965 16:37:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:26.965 16:37:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:26.965 16:37:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.224 ************************************ 00:32:27.224 START TEST nvmf_host_discovery 00:32:27.224 ************************************ 00:32:27.224 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:27.224 * Looking for test storage... 00:32:27.224 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:27.224 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:27.224 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:32:27.224 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:27.224 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:27.224 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:27.224 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:27.224 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:27.224 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:27.224 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:27.224 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:27.224 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:27.224 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:27.224 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:27.224 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:27.224 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:27.224 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:27.224 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:27.224 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:27.224 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:27.224 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:27.224 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:27.224 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:27.224 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.224 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.225 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.225 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:32:27.225 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.225 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:32:27.225 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:27.225 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:27.225 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:27.225 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:27.225 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:27.225 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:27.225 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:27.225 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:27.225 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:32:27.225 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:32:27.225 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:27.225 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:27.225 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:27.225 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:32:27.225 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:32:27.225 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:27.225 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:27.225 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:27.225 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:27.225 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:27.225 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:27.225 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:27.225 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:27.225 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:27.225 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:27.225 16:37:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:32:27.225 16:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:29.126 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:29.126 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:32:29.126 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:29.126 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:29.126 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:29.126 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:29.126 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:29.126 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:32:29.126 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:29.126 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:32:29.126 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:32:29.126 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:32:29.126 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:32:29.126 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:29.127 16:37:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:29.127 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:29.127 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:29.127 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:29.127 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:29.127 16:37:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:29.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:29.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:32:29.127 00:32:29.127 --- 10.0.0.2 ping statistics --- 00:32:29.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:29.127 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:29.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:29.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:32:29.127 00:32:29.127 --- 10.0.0.1 ping statistics --- 00:32:29.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:29.127 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=782316 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 782316 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 782316 ']' 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 
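The nvmf_tcp_init steps traced above build the two-sided test topology on the two physical ports: the target interface (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace while the initiator interface (cvl_0_1, 10.0.0.1) stays in the root namespace, and the two pings confirm reachability in both directions. A minimal sketch of that setup, using the interface names and addresses from this run:

  # Sketch of the network setup performed by nvmf_tcp_init above (names/IPs from this run).
  ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root ns)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator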
00:32:29.127 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:29.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:29.128 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:29.128 16:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:29.128 [2024-07-26 16:37:48.884723] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:32:29.128 [2024-07-26 16:37:48.884888] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:29.386 EAL: No free 2048 kB hugepages reported on node 1 00:32:29.386 [2024-07-26 16:37:49.024051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:29.644 [2024-07-26 16:37:49.279644] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:29.644 [2024-07-26 16:37:49.279722] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:29.644 [2024-07-26 16:37:49.279750] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:29.644 [2024-07-26 16:37:49.279775] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:29.644 [2024-07-26 16:37:49.279797] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:29.644 [2024-07-26 16:37:49.279844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:30.210 16:37:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:30.210 16:37:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:32:30.210 16:37:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:30.210 16:37:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:30.210 16:37:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:30.210 16:37:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:30.210 16:37:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:30.210 16:37:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.210 16:37:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:30.210 [2024-07-26 16:37:49.818031] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:30.210 16:37:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.210 16:37:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:32:30.210 16:37:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.210 16:37:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:32:30.210 [2024-07-26 16:37:49.826308] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:30.210 16:37:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.210 16:37:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:32:30.210 16:37:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.211 16:37:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:30.211 null0 00:32:30.211 16:37:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.211 16:37:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:32:30.211 16:37:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.211 16:37:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:30.211 null1 00:32:30.211 16:37:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.211 16:37:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:32:30.211 16:37:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.211 16:37:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:30.211 16:37:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.211 16:37:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=782469 00:32:30.211 16:37:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:32:30.211 16:37:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 782469 /tmp/host.sock 00:32:30.211 16:37:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 782469 ']' 00:32:30.211 16:37:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:32:30.211 16:37:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:30.211 16:37:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:30.211 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:30.211 16:37:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:30.211 16:37:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:30.211 [2024-07-26 16:37:49.939554] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
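At this point the target has a discovery listener on 10.0.0.2:8009 and two null bdevs, and a second nvmf_tgt acting as the host has just been launched on /tmp/host.sock. A condensed sketch of the discovery flow the test drives below, assembled from the traced rpc_cmd calls; the script's rpc_cmd wrapper is shown here as direct rpc.py invocations, with the target side assumed to use its default RPC socket.

  # Hedged sketch of the host-discovery flow (commands taken from the trace below).
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Target side: TCP transport, discovery listener, and two null bdevs to export.
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  $RPC bdev_null_create null0 1000 512
  $RPC bdev_null_create null1 1000 512

  # Host side: a separate nvmf_tgt on /tmp/host.sock runs discovery against the target.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
  $RPC -s /tmp/host.sock log_set_flag bdev_nvme
  $RPC -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

  # The test then checks what discovery attached on the host side.
  $RPC -s /tmp/host.sock bdev_nvme_get_controllers
  $RPC -s /tmp/host.sock bdev_get_bdevs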
00:32:30.211 [2024-07-26 16:37:49.939723] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid782469 ] 00:32:30.469 EAL: No free 2048 kB hugepages reported on node 1 00:32:30.469 [2024-07-26 16:37:50.076242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:30.727 [2024-07-26 16:37:50.326893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:31.292 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:31.292 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:32:31.292 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:31.292 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:32:31.292 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.292 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:31.292 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.292 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:32:31.292 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.292 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:31.292 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.292 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:32:31.292 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:32:31.292 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:31.292 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.292 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:31.292 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:31.292 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:31.292 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:31.293 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.293 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:32:31.293 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:32:31.293 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:31.293 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:31.293 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.293 16:37:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:31.293 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:31.293 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:31.293 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.293 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:32:31.293 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:32:31.293 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.293 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:31.293 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.293 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:32:31.293 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:31.293 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:31.293 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.293 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:31.293 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:31.293 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:31.293 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.293 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:32:31.293 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:32:31.293 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:31.293 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:31.293 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.293 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:31.293 16:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:31.293 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:31.293 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.293 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:32:31.293 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:32:31.293 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.293 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:31.293 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.293 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:32:31.293 16:37:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:31.293 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.293 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:31.293 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:31.293 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:31.293 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:31.551 [2024-07-26 16:37:51.137977] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # 
get_bdev_list 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:31.551 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.809 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:32:31.809 16:37:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:32:32.374 [2024-07-26 16:37:51.911284] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:32.374 [2024-07-26 16:37:51.911328] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:32.374 [2024-07-26 16:37:51.911390] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:32.374 [2024-07-26 16:37:51.997693] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:32.631 [2024-07-26 16:37:52.181824] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:32.631 [2024-07-26 16:37:52.181864] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:32.631 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:32.631 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:32.631 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:32.631 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:32.631 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.631 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:32.631 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:32.631 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:32.631 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:32.631 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.631 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.631 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:32.631 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:32.631 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:32.631 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:32.631 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:32.632 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:32:32.632 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:32.632 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:32.632 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.632 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:32.632 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:32.632 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:32.632 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:32.632 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.889 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 
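The repeated bdev_nvme_get_controllers / bdev_get_bdevs invocations and the max=10 / sleep 1 loop in the trace are the test's helper and polling pattern (host/discovery.sh@55/@59 and autotest_common.sh@914-@920). A sketch reconstructed from the xtrace follows; rpc.py stands in for the rpc_cmd wrapper, and the failure path of the loop is assumed since the trace only shows the success path.

    get_subsystem_names() {
        rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {
        rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    waitforcondition() {
        # poll a shell condition for up to ~10 seconds, as seen at autotest_common.sh@914-@920
        local cond=$1 max=10
        while ((max--)); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1   # assumed timeout behaviour; not exercised in the trace above
    }
    # e.g. waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'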
00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && 
((notification_count == expected_count))' 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:32.890 16:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:32:33.824 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:33.824 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:33.824 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:33.824 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:33.824 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:33.824 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.824 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:34.112 [2024-07-26 16:37:53.630871] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:34.112 [2024-07-26 16:37:53.631747] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:34.112 [2024-07-26 16:37:53.631818] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:34.112 16:37:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.112 [2024-07-26 16:37:53.718589] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:34.112 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.113 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:34.113 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:34.113 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:34.113 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@63 -- # xargs 00:32:34.113 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.113 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:32:34.113 16:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:32:34.113 [2024-07-26 16:37:53.826603] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:34.113 [2024-07-26 16:37:53.826643] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:34.113 [2024-07-26 16:37:53.826664] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:35.045 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:35.045 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:35.045 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:35.045 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:35.045 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.045 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:35.045 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.045 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:35.045 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:35.045 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.045 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:32:35.045 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:35.045 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:32:35.045 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:35.045 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:35.045 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:35.045 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:35.045 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:35.045 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:35.045 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:35.304 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
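The checks around host/discovery.sh@63 and @74/@75 in the trace above read the controller's listening ports and the notification count. A sketch of those two helpers, reconstructed from the xtrace: the jq filters and RPC names are copied from the log, while the notify_id bookkeeping is inferred from the counts shown (0 -> 1 -> 2) and should be treated as an assumption.

    get_subsystem_paths() {
        # prints the trsvcid (port) of every path of controller $1, e.g. "4420 4421"
        rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
    get_notification_count() {
        # counts notifications newer than $notify_id and advances the cursor
        notification_count=$(rpc.py -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
            | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }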
-s /tmp/host.sock notify_get_notifications -i 2 00:32:35.304 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.304 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.304 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:35.304 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.304 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:35.304 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:35.304 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:35.304 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:35.304 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:35.304 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.304 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.304 [2024-07-26 16:37:54.851127] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:35.304 [2024-07-26 16:37:54.851189] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:35.304 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.304 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:35.304 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:35.304 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:35.304 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:35.304 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:35.304 [2024-07-26 16:37:54.855532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.304 [2024-07-26 16:37:54.855615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.304 [2024-07-26 16:37:54.855660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.304 [2024-07-26 16:37:54.855688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.304 [2024-07-26 16:37:54.855711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.304 [2024-07-26 16:37:54.855732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.304 [2024-07-26 16:37:54.855755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.304 [2024-07-26 16:37:54.855776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.304 [2024-07-26 16:37:54.855797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:32:35.304 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:35.304 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:35.304 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.304 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.304 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:35.304 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:35.304 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:35.304 [2024-07-26 16:37:54.865526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:32:35.304 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.304 [2024-07-26 16:37:54.875575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:35.304 [2024-07-26 16:37:54.875891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.304 [2024-07-26 16:37:54.875931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:32:35.304 [2024-07-26 16:37:54.875958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:32:35.304 [2024-07-26 16:37:54.875993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:32:35.304 [2024-07-26 16:37:54.876028] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:35.304 [2024-07-26 16:37:54.876087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:35.304 [2024-07-26 16:37:54.876136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:35.304 [2024-07-26 16:37:54.876186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.304 [2024-07-26 16:37:54.885693] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:35.304 [2024-07-26 16:37:54.885960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.304 [2024-07-26 16:37:54.885997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:32:35.304 [2024-07-26 16:37:54.886027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:32:35.304 [2024-07-26 16:37:54.886080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:32:35.304 [2024-07-26 16:37:54.886114] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:35.304 [2024-07-26 16:37:54.886136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:35.304 [2024-07-26 16:37:54.886156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:35.305 [2024-07-26 16:37:54.886185] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.305 [2024-07-26 16:37:54.895799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:35.305 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.305 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:35.305 [2024-07-26 16:37:54.896093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.305 [2024-07-26 16:37:54.896133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:32:35.305 [2024-07-26 16:37:54.896158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:32:35.305 [2024-07-26 16:37:54.896193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:35.305 (9): Bad file descriptor 00:32:35.305 [2024-07-26 16:37:54.896229] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:35.305 [2024-07-26 16:37:54.896251] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:35.305 [2024-07-26 16:37:54.896271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:35.305 [2024-07-26 16:37:54.896301] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.305 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:35.305 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:35.305 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:35.305 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:35.305 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:35.305 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:35.305 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.305 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:35.305 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.305 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:35.305 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:35.305 [2024-07-26 16:37:54.905910] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:35.305 [2024-07-26 16:37:54.906153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.305 [2024-07-26 16:37:54.906192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:32:35.305 [2024-07-26 16:37:54.906218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:32:35.305 [2024-07-26 16:37:54.906257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:32:35.305 [2024-07-26 16:37:54.906290] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:35.305 [2024-07-26 16:37:54.906312] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:35.305 [2024-07-26 16:37:54.906332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:35.305 [2024-07-26 16:37:54.906384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.305 [2024-07-26 16:37:54.916015] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:35.305 [2024-07-26 16:37:54.916227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.305 [2024-07-26 16:37:54.916265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:32:35.305 [2024-07-26 16:37:54.916289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:32:35.305 [2024-07-26 16:37:54.916322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:32:35.305 [2024-07-26 16:37:54.916364] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:35.305 [2024-07-26 16:37:54.916386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:35.305 [2024-07-26 16:37:54.916406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:35.305 [2024-07-26 16:37:54.916450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.305 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.305 [2024-07-26 16:37:54.926133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:35.305 [2024-07-26 16:37:54.926378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.305 [2024-07-26 16:37:54.926416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:32:35.305 [2024-07-26 16:37:54.926440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:32:35.305 [2024-07-26 16:37:54.926473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:32:35.305 [2024-07-26 16:37:54.926503] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:35.305 [2024-07-26 16:37:54.926524] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:35.305 [2024-07-26 16:37:54.926544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:35.305 [2024-07-26 16:37:54.926589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.305 [2024-07-26 16:37:54.936238] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:35.305 [2024-07-26 16:37:54.936486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.305 [2024-07-26 16:37:54.936525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:32:35.305 [2024-07-26 16:37:54.936550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:32:35.305 [2024-07-26 16:37:54.936598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:32:35.305 [2024-07-26 16:37:54.936630] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:35.305 [2024-07-26 16:37:54.936672] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:35.305 [2024-07-26 16:37:54.936692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:35.305 [2024-07-26 16:37:54.936788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.305 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:35.305 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:35.305 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:35.305 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:35.305 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:35.305 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:35.305 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:35.305 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:35.305 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:35.305 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:35.305 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.305 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:35.305 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.305 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:35.305 [2024-07-26 16:37:54.946337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:35.305 [2024-07-26 16:37:54.946620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.305 [2024-07-26 16:37:54.946659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of 
tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:32:35.305 [2024-07-26 16:37:54.946684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:32:35.305 [2024-07-26 16:37:54.946718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:32:35.305 [2024-07-26 16:37:54.946779] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:35.305 [2024-07-26 16:37:54.946804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:35.305 [2024-07-26 16:37:54.946839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:35.305 [2024-07-26 16:37:54.946873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.305 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.305 [2024-07-26 16:37:54.956459] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:35.305 [2024-07-26 16:37:54.956738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.305 [2024-07-26 16:37:54.956779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:32:35.305 [2024-07-26 16:37:54.956806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:32:35.305 [2024-07-26 16:37:54.956843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:32:35.305 [2024-07-26 16:37:54.956905] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:35.305 [2024-07-26 16:37:54.956935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:35.306 [2024-07-26 16:37:54.956958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:35.306 [2024-07-26 16:37:54.956991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.306 [2024-07-26 16:37:54.966583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:35.306 [2024-07-26 16:37:54.966822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.306 [2024-07-26 16:37:54.966858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:32:35.306 [2024-07-26 16:37:54.966882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:32:35.306 [2024-07-26 16:37:54.966913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:32:35.306 [2024-07-26 16:37:54.966975] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:35.306 [2024-07-26 16:37:54.967002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:35.306 [2024-07-26 16:37:54.967037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:35.306 [2024-07-26 16:37:54.967076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:35.306 [2024-07-26 16:37:54.976686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:35.306 [2024-07-26 16:37:54.976928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.306 [2024-07-26 16:37:54.976969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:32:35.306 [2024-07-26 16:37:54.976996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:32:35.306 [2024-07-26 16:37:54.977033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:32:35.306 [2024-07-26 16:37:54.977111] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:35.306 [2024-07-26 16:37:54.977139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:35.306 [2024-07-26 16:37:54.977159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:35.306 [2024-07-26 16:37:54.977187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:35.306 [2024-07-26 16:37:54.979135] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:32:35.306 [2024-07-26 16:37:54.979176] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:35.306 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:32:35.306 16:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:32:36.239 16:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:36.239 16:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:36.239 16:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:36.239 16:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:36.239 16:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:36.239 16:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.239 16:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.239 16:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:36.239 16:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:36.239 16:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 
-- # (( max-- )) 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.498 16:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.872 [2024-07-26 16:37:57.275690] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:37.872 [2024-07-26 16:37:57.275756] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:37.873 [2024-07-26 16:37:57.275808] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:37.873 [2024-07-26 16:37:57.404269] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:32:37.873 [2024-07-26 16:37:57.470550] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:37.873 [2024-07-26 16:37:57.470627] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.873 request: 00:32:37.873 { 00:32:37.873 "name": "nvme", 00:32:37.873 "trtype": "tcp", 00:32:37.873 "traddr": "10.0.0.2", 00:32:37.873 "adrfam": "ipv4", 00:32:37.873 "trsvcid": "8009", 00:32:37.873 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:37.873 "wait_for_attach": true, 00:32:37.873 "method": "bdev_nvme_start_discovery", 00:32:37.873 "req_id": 1 00:32:37.873 } 00:32:37.873 Got JSON-RPC error response 00:32:37.873 response: 00:32:37.873 { 00:32:37.873 "code": -17, 00:32:37.873 "message": "File exists" 00:32:37.873 } 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.873 request: 00:32:37.873 { 00:32:37.873 "name": "nvme_second", 00:32:37.873 "trtype": "tcp", 00:32:37.873 "traddr": "10.0.0.2", 00:32:37.873 "adrfam": "ipv4", 00:32:37.873 "trsvcid": "8009", 00:32:37.873 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:37.873 "wait_for_attach": true, 00:32:37.873 "method": "bdev_nvme_start_discovery", 00:32:37.873 "req_id": 1 00:32:37.873 } 00:32:37.873 Got JSON-RPC error response 00:32:37.873 response: 00:32:37.873 { 00:32:37.873 "code": -17, 00:32:37.873 "message": "File exists" 00:32:37.873 } 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:37.873 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.130 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:32:38.130 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:32:38.130 16:37:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:38.130 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:38.130 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.130 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.130 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:38.130 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:38.130 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.130 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:38.130 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:38.130 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:32:38.130 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:38.130 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:38.130 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:38.130 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:38.130 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:38.130 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:38.130 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.131 16:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.062 [2024-07-26 16:37:58.686749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:39.062 [2024-07-26 16:37:58.686858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3400 with addr=10.0.0.2, port=8010 00:32:39.062 [2024-07-26 16:37:58.686952] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:39.062 [2024-07-26 16:37:58.686978] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:39.062 [2024-07-26 16:37:58.687000] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:39.994 [2024-07-26 16:37:59.689034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:39.994 [2024-07-26 16:37:59.689128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3680 with addr=10.0.0.2, port=8010 00:32:39.994 [2024-07-26 16:37:59.689206] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:39.994 [2024-07-26 16:37:59.689232] nvme.c: 
830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:39.994 [2024-07-26 16:37:59.689253] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:41.367 [2024-07-26 16:38:00.691051] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:32:41.367 request: 00:32:41.367 { 00:32:41.367 "name": "nvme_second", 00:32:41.367 "trtype": "tcp", 00:32:41.367 "traddr": "10.0.0.2", 00:32:41.367 "adrfam": "ipv4", 00:32:41.367 "trsvcid": "8010", 00:32:41.367 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:41.367 "wait_for_attach": false, 00:32:41.367 "attach_timeout_ms": 3000, 00:32:41.367 "method": "bdev_nvme_start_discovery", 00:32:41.367 "req_id": 1 00:32:41.367 } 00:32:41.367 Got JSON-RPC error response 00:32:41.367 response: 00:32:41.367 { 00:32:41.367 "code": -110, 00:32:41.367 "message": "Connection timed out" 00:32:41.367 } 00:32:41.367 16:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:41.367 16:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:32:41.367 16:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:41.367 16:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:41.367 16:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:41.367 16:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:32:41.367 16:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:41.367 16:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:41.367 16:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.367 16:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:41.367 16:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:41.367 16:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:41.367 16:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.367 16:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:32:41.367 16:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:32:41.367 16:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 782469 00:32:41.367 16:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:32:41.367 16:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:41.367 16:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:32:41.367 16:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:41.367 16:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:32:41.367 16:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:41.367 16:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:41.367 rmmod nvme_tcp 00:32:41.367 rmmod nvme_fabrics 00:32:41.367 rmmod nvme_keyring 00:32:41.367 16:38:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:41.367 16:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:32:41.367 16:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:32:41.367 16:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 782316 ']' 00:32:41.367 16:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 782316 00:32:41.367 16:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 782316 ']' 00:32:41.367 16:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 782316 00:32:41.367 16:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:32:41.367 16:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:41.367 16:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 782316 00:32:41.367 16:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:41.367 16:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:41.367 16:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 782316' 00:32:41.367 killing process with pid 782316 00:32:41.367 16:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 782316 00:32:41.367 16:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 782316 00:32:42.741 16:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:42.741 16:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:42.741 16:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:42.741 16:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:42.741 16:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:42.741 16:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:42.741 16:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:42.741 16:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:44.640 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:44.640 00:32:44.640 real 0m17.442s 00:32:44.640 user 0m27.015s 00:32:44.640 sys 0m3.115s 00:32:44.640 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:44.640 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.640 ************************************ 00:32:44.640 END TEST nvmf_host_discovery 00:32:44.640 ************************************ 00:32:44.640 16:38:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:44.640 16:38:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:44.640 
16:38:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:44.640 16:38:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.640 ************************************ 00:32:44.640 START TEST nvmf_host_multipath_status 00:32:44.640 ************************************ 00:32:44.640 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:44.640 * Looking for test storage... 00:32:44.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:44.640 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:44.640 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:32:44.640 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:44.640 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:44.640 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:44.640 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:44.640 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:44.640 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:44.640 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:44.640 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:44.640 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:44.640 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:44.640 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:44.640 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:44.640 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:44.640 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:44.640 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:44.640 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:44.640 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:44.640 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:44.640 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:44.640 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:44.640 
16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.640 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.640 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.640 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:32:44.640 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.640 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:32:44.640 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:44.640 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:44.640 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:44.640 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:44.640 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:44.640 
16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:44.640 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:44.641 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:44.641 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:32:44.641 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:44.641 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:44.641 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:32:44.641 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:44.641 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:44.641 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:32:44.641 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:44.641 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:44.641 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:44.641 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:44.641 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:44.641 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:44.641 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:44.641 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:44.641 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:44.641 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:44.641 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:32:44.641 16:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:46.543 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:46.543 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:32:46.543 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:46.543 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:46.543 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:46.543 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:46.543 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # 
local -A pci_drivers 00:32:46.543 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:32:46.543 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:46.544 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:46.544 16:38:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:46.544 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:46.544 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:46.544 16:38:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:46.544 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:46.544 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:46.803 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:46.803 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:46.803 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:46.803 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:46.803 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:46.803 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables 
-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:46.803 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:46.803 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:46.803 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:32:46.803 00:32:46.803 --- 10.0.0.2 ping statistics --- 00:32:46.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.803 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:32:46.803 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:46.803 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:46.803 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:32:46.803 00:32:46.803 --- 10.0.0.1 ping statistics --- 00:32:46.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.803 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:32:46.803 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:46.803 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:32:46.803 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:46.803 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:46.803 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:46.803 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:46.803 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:46.803 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:46.803 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:46.803 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:32:46.803 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:46.803 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:46.803 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:46.803 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=785904 00:32:46.803 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:32:46.803 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 785904 00:32:46.803 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 785904 ']' 00:32:46.803 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:46.803 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:46.804 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:46.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:46.804 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:46.804 16:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:46.804 [2024-07-26 16:38:06.530579] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:32:46.804 [2024-07-26 16:38:06.530740] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:47.062 EAL: No free 2048 kB hugepages reported on node 1 00:32:47.062 [2024-07-26 16:38:06.669953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:47.321 [2024-07-26 16:38:06.904440] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:47.321 [2024-07-26 16:38:06.904510] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:47.321 [2024-07-26 16:38:06.904553] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:47.321 [2024-07-26 16:38:06.904571] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:47.321 [2024-07-26 16:38:06.904589] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:47.321 [2024-07-26 16:38:06.904911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:47.321 [2024-07-26 16:38:06.904919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:47.886 16:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:47.886 16:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:32:47.887 16:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:47.887 16:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:47.887 16:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:47.887 16:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:47.887 16:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=785904 00:32:47.887 16:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:48.144 [2024-07-26 16:38:07.783685] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:48.144 16:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:48.403 Malloc0 00:32:48.403 16:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:32:48.968 16:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:48.968 16:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:49.225 [2024-07-26 16:38:08.967304] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:49.513 16:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:49.513 [2024-07-26 16:38:09.212047] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:49.513 16:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=786317 00:32:49.513 16:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:32:49.513 16:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:49.513 16:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 786317 /var/tmp/bdevperf.sock 00:32:49.513 16:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 786317 ']' 00:32:49.513 16:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:49.513 16:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:49.513 16:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:49.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
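For reference, the target-side configuration that multipath_status.sh has driven up to this point reduces to a short RPC sequence against the nvmf_tgt started in the cvl_0_0_ns_spdk namespace. This is a sketch assembled from the commands visible in this log; $SPDK_DIR is only shorthand for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout and is not something the script itself defines.

    # TCP transport plus a RAM-backed bdev to export
    $SPDK_DIR/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    $SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # subsystem with ANA reporting enabled (-r), serial and max-namespace values as used by the test
    $SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # two listeners on the same target address give the host two I/O paths to the one namespace
    $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The bdevperf instance launched below then attaches to both listeners (the second with -x multipath), which is what produces the two paths whose state the rest of the test toggles and checks.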
00:32:49.513 16:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:49.513 16:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:50.888 16:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:50.888 16:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:32:50.888 16:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:32:50.888 16:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:32:51.147 Nvme0n1 00:32:51.147 16:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:51.712 Nvme0n1 00:32:51.712 16:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:32:51.712 16:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:32:53.612 16:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:32:53.612 16:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:32:53.870 16:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:54.127 16:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:32:55.061 16:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:32:55.061 16:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:55.061 16:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:55.061 16:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:55.319 16:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:55.319 16:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:55.319 16:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:55.319 16:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:55.577 16:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:55.577 16:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:55.577 16:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:55.577 16:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:55.835 16:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:55.835 16:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:55.835 16:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:55.835 16:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:56.093 16:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:56.093 16:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:56.093 16:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:56.093 16:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:56.351 16:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:56.351 16:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:56.351 16:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:56.351 16:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:56.610 16:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:56.610 16:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:32:56.610 16:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:56.868 16:38:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:57.126 16:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:32:58.061 16:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:32:58.061 16:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:58.061 16:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:58.061 16:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:58.319 16:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:58.319 16:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:58.319 16:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:58.319 16:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:58.577 16:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:58.578 16:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:58.578 16:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:58.578 16:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:58.836 16:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:58.836 16:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:58.836 16:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:58.836 16:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:59.094 16:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:59.094 16:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:59.094 16:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.094 16:38:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:59.352 16:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:59.352 16:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:59.352 16:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.352 16:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:59.610 16:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:59.610 16:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:32:59.610 16:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:59.868 16:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:00.126 16:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:33:01.500 16:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:33:01.500 16:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:01.500 16:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:01.500 16:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:01.501 16:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:01.501 16:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:01.501 16:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:01.501 16:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:01.759 16:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:01.759 16:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:01.759 16:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:01.759 16:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:02.017 16:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:02.017 16:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:02.017 16:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.017 16:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:02.275 16:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:02.275 16:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:02.275 16:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.275 16:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:02.533 16:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:02.533 16:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:02.533 16:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.533 16:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:02.791 16:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:02.791 16:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:33:02.791 16:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:03.048 16:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:03.306 16:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:33:04.290 16:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:33:04.290 16:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:04.290 16:38:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.290 16:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:04.548 16:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:04.548 16:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:04.548 16:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.548 16:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:04.806 16:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:04.806 16:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:04.806 16:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.806 16:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:05.064 16:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.064 16:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:05.064 16:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.064 16:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:05.322 16:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.322 16:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:05.322 16:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.322 16:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:05.580 16:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.580 16:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:05.580 16:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.580 16:38:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:05.837 16:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:05.837 16:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:33:05.837 16:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:06.095 16:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:06.352 16:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:33:07.284 16:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:33:07.284 16:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:07.284 16:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.284 16:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:07.542 16:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:07.542 16:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:07.542 16:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.542 16:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:07.799 16:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:07.799 16:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:07.799 16:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.799 16:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:08.057 16:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:08.057 16:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:08.057 16:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.057 16:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:08.315 16:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:08.315 16:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:08.315 16:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.315 16:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:08.573 16:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:08.573 16:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:08.573 16:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.573 16:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:08.831 16:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:08.831 16:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:33:08.831 16:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:09.089 16:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:09.347 16:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:33:10.280 16:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:33:10.280 16:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:10.280 16:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.280 16:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:10.539 16:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:10.539 16:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:10.539 16:38:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.539 16:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:10.797 16:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:10.797 16:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:10.797 16:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.797 16:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:11.055 16:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:11.055 16:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:11.056 16:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:11.056 16:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:11.314 16:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:11.314 16:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:11.314 16:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:11.314 16:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:11.572 16:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:11.572 16:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:11.572 16:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:11.572 16:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:11.831 16:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:11.831 16:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:33:12.089 16:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:33:12.089 16:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:12.347 16:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:12.606 16:38:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:33:13.541 16:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:33:13.541 16:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:13.541 16:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.541 16:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:13.798 16:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:13.799 16:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:13.799 16:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.799 16:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:14.056 16:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:14.056 16:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:14.056 16:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.056 16:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:14.314 16:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:14.314 16:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:14.314 16:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.314 16:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:14.572 16:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:14.572 16:38:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:14.572 16:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.572 16:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:14.830 16:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:14.830 16:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:14.830 16:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.830 16:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:15.088 16:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:15.088 16:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:33:15.088 16:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:15.346 16:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:15.605 16:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:33:16.540 16:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:33:16.540 16:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:16.540 16:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.540 16:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:16.798 16:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:16.798 16:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:16.798 16:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.798 16:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:17.067 16:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:17.067 16:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:17.068 16:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:17.068 16:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:17.372 16:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:17.372 16:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:17.372 16:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:17.372 16:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:17.630 16:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:17.630 16:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:17.630 16:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:17.630 16:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:17.888 16:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:17.888 16:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:17.888 16:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:17.888 16:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:18.146 16:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:18.146 16:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:33:18.146 16:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:18.404 16:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:18.663 16:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
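Each check_status round above follows the same pattern: flip the ANA state of one or both listeners on the target with nvmf_subsystem_listener_set_ana_state, sleep a second, then interrogate bdevperf for the resulting path view. The per-port probe is just bdev_nvme_get_io_paths piped through jq; a sketch using the same filter as the script, with $SPDK_DIR again standing in for the workspace path shown in the log:

    # "current" flag of the path behind port 4420, as reported by the bdevperf RPC socket
    $SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
      | jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").current'

Substituting "connected" or "accessible" for "current", and 4421 for 4420, yields the other five booleans that each check_status invocation asserts against the expected values.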
00:33:19.596 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:33:19.596 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:19.596 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.596 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:19.854 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.854 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:19.854 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.854 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:20.111 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:20.112 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:20.112 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:20.112 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:20.370 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:20.370 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:20.370 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:20.370 16:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:20.628 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:20.628 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:20.628 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:20.628 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:20.886 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:20.886 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:20.886 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:20.886 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:21.145 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.145 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:33:21.145 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:21.403 16:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:21.661 16:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:33:22.628 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:33:22.628 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:22.628 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.628 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:22.886 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:22.886 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:22.886 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.886 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:23.144 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:23.144 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:23.144 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:23.144 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:23.402 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:33:23.402 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:23.402 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:23.402 16:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:23.659 16:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:23.659 16:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:23.659 16:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:23.659 16:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:23.916 16:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:23.916 16:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:23.916 16:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:23.916 16:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:24.174 16:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:24.174 16:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 786317 00:33:24.174 16:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 786317 ']' 00:33:24.174 16:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 786317 00:33:24.174 16:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:33:24.174 16:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:24.174 16:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 786317 00:33:24.174 16:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:33:24.174 16:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:33:24.174 16:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 786317' 00:33:24.175 killing process with pid 786317 00:33:24.175 16:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 786317 00:33:24.175 16:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 786317 00:33:24.741 Connection closed with partial response: 00:33:24.741 00:33:24.741 00:33:25.314 
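For reference, each port_status check traced above reduces to a single bdev_nvme_get_io_paths RPC against the bdevperf application socket plus a jq filter on the listener's trsvcid, and the ANA flips use nvmf_subsystem_listener_set_ana_state against the target. A minimal standalone sketch of the same sequence, assuming the stock scripts/rpc.py and the /var/tmp/bdevperf.sock socket used in this run (the check_port_field helper name is illustrative, not part of multipath_status.sh):

  #!/usr/bin/env bash
  # Query bdevperf's view of its I/O paths and compare one field for one listener port.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  check_port_field() {   # illustrative helper, mirrors what port_status does in the trace above
          local port=$1 field=$2 expected=$3
          local got
          got=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
                  | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
          [[ $got == "$expected" ]]
  }

  # Flip the 4421 listener to inaccessible, as done before the second check_status, then re-check.
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
  sleep 1
  check_port_field 4421 accessible false   # corresponds to "port_status 4421 accessible false"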
16:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 786317 00:33:25.314 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:25.314 [2024-07-26 16:38:09.310393] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:33:25.314 [2024-07-26 16:38:09.310557] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786317 ] 00:33:25.314 EAL: No free 2048 kB hugepages reported on node 1 00:33:25.314 [2024-07-26 16:38:09.434250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:25.314 [2024-07-26 16:38:09.673388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:25.314 Running I/O for 90 seconds... 00:33:25.314 [2024-07-26 16:38:25.620831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:57704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.314 [2024-07-26 16:38:25.620939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:25.314 [2024-07-26 16:38:25.621053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:57712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.314 [2024-07-26 16:38:25.621093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:25.314 [2024-07-26 16:38:25.621152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:57720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.314 [2024-07-26 16:38:25.621178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:25.314 [2024-07-26 16:38:25.621214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:57728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.314 [2024-07-26 16:38:25.621240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:25.314 [2024-07-26 16:38:25.621275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:57736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.314 [2024-07-26 16:38:25.621300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:25.314 [2024-07-26 16:38:25.621343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:57744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.314 [2024-07-26 16:38:25.621383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:25.314 [2024-07-26 16:38:25.621418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:57752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.314 [2024-07-26 16:38:25.621442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:25.314 
[2024-07-26 16:38:25.621475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:57760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.314 [2024-07-26 16:38:25.621499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:25.314 [2024-07-26 16:38:25.621696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:57768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.314 [2024-07-26 16:38:25.621742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:25.314 [2024-07-26 16:38:25.621799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:57776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.314 [2024-07-26 16:38:25.621826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:25.314 [2024-07-26 16:38:25.621865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:57784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.314 [2024-07-26 16:38:25.621902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:25.314 [2024-07-26 16:38:25.621939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:57792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.314 [2024-07-26 16:38:25.621965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:25.314 [2024-07-26 16:38:25.622017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:57800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.314 [2024-07-26 16:38:25.622041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:25.314 [2024-07-26 16:38:25.622105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:57808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.314 [2024-07-26 16:38:25.622145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:25.314 [2024-07-26 16:38:25.622180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:57816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.314 [2024-07-26 16:38:25.622204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:25.314 [2024-07-26 16:38:25.622238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:57824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.314 [2024-07-26 16:38:25.622262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:25.314 [2024-07-26 16:38:25.622363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:57832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.314 [2024-07-26 16:38:25.622393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.314 [2024-07-26 16:38:25.622449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:57840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.314 [2024-07-26 16:38:25.622474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:25.314 [2024-07-26 16:38:25.622510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:57848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.314 [2024-07-26 16:38:25.622535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:25.314 [2024-07-26 16:38:25.622570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:57856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.314 [2024-07-26 16:38:25.622594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:25.314 [2024-07-26 16:38:25.622629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:57072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.314 [2024-07-26 16:38:25.622652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:25.314 [2024-07-26 16:38:25.622703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:57080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.314 [2024-07-26 16:38:25.622726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:25.314 [2024-07-26 16:38:25.622760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:57088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.622783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.622823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:57096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.622846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.622880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:57104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.622903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.622936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:57112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.622959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.622992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:57120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.623016] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.623074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:57128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.623101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.623137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:57136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.623161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.623196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:57144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.623220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.623254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:57152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.623278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.623312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:57160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.623336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.623386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:57168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.623409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.623442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:57176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.623465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.623498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:57184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.623522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.623559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:57192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.623582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.623616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:57864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.315 [2024-07-26 
16:38:25.623638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.623672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:57872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.315 [2024-07-26 16:38:25.623695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.623728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:57880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.315 [2024-07-26 16:38:25.623751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.623784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:57888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.315 [2024-07-26 16:38:25.623808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.624356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:57896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.315 [2024-07-26 16:38:25.624400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.624441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:57904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.315 [2024-07-26 16:38:25.624465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.624501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:57912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.315 [2024-07-26 16:38:25.624541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.624580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:57920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.315 [2024-07-26 16:38:25.624604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.624642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:57928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.315 [2024-07-26 16:38:25.624667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.624704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:57936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.315 [2024-07-26 16:38:25.624729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.624768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:57944 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:33:25.315 [2024-07-26 16:38:25.624792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.624844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:57952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.315 [2024-07-26 16:38:25.624874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.624927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:57960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.315 [2024-07-26 16:38:25.624951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.624987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:57968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.315 [2024-07-26 16:38:25.625010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.625069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:57976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.315 [2024-07-26 16:38:25.625095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.625150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:57984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.315 [2024-07-26 16:38:25.625175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.625213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:57992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.315 [2024-07-26 16:38:25.625238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.625277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:58000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.315 [2024-07-26 16:38:25.625301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.625339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.315 [2024-07-26 16:38:25.625364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.625401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:58016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.315 [2024-07-26 16:38:25.625442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.625480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:12 nsid:1 lba:58024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.315 [2024-07-26 16:38:25.625504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.625602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:58032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.315 [2024-07-26 16:38:25.625630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.625673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:58040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.315 [2024-07-26 16:38:25.625698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.625736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:58048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.315 [2024-07-26 16:38:25.625766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.625805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:58056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.315 [2024-07-26 16:38:25.625829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.625868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:58064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.315 [2024-07-26 16:38:25.625892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.625930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:58072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.315 [2024-07-26 16:38:25.625954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.625994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:58080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.315 [2024-07-26 16:38:25.626018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.626081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:57200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.626108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.626147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:57208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.626173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.626230] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:57216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.626255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.626294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:57224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.626319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.626359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:57232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.626399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.626437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:57240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.626461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.626499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:57248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.626523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.626561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:57256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.626594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.626635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:57264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.626661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.626699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:57272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.626725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.626764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:57280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.626787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.626825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:57288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.626852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 
sqhd:007c p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.626891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:57296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.626915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.626953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:57304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.626978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.627016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:57312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.627057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.627108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:57320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.627136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.627176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:57328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.627202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.627241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:57336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.627267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.627307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:57344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.627333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.627386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:57352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.627411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.627455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:57360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.627481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.627519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:57368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.627544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.627583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:57376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.627608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.627646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:57384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.627671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.627709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:57392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.627734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.627773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:57400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.627797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.627835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:57408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.627860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.627898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:57416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.627923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:25.315 [2024-07-26 16:38:25.627962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:57424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.315 [2024-07-26 16:38:25.627989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:25.628027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:57432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:25.628077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:25.628121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:57440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:25.628150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:25.628190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:57448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 
16:38:25.628217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:25.628261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:57456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:25.628289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:25.628328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:57464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:25.628368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:25.628408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:57472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:25.628434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:25.628473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:57480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:25.628498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:25.628537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:57488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:25.628561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:25.628600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:57496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:25.628626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:25.628664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:57504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:25.628689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:25.628727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:57512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:25.628752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:25.628791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:57520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:25.628816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:25.628854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:57528 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:25.628879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:25.628916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:57536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:25.628942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:25.628980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:57544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:25.629004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:25.629065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:57552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:25.629098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:25.629140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:57560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:25.629167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:25.629207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:57568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:25.629241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:25.629281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:57576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:25.629307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:25.629347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:57584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:25.629374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:25.629429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:57592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:25.629454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:25.629491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:57600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:25.629517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:25.629555] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:57608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:25.629580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:25.629617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:57616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:25.629642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:25.629680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:57624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:25.629705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:25.629743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:57632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:25.629767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:25.629804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:57640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:25.629830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:25.629869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:57648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:25.629899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:25.630243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:57656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:25.630275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:25.630338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:57664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:25.630367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:25.630413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:57672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:25.630440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:25.630485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:57680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:25.630526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002d p:0 m:0 dnr:0 
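The repeated ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions in this dump are the expected fallout of the ANA flips performed during the run: SPDK prints the pair as (status code type/status code), and type 0x3 with code 0x2 is the NVMe path-related "Asymmetric Access Inaccessible" status, so commands queued to a listener whose ANA state was set to inaccessible complete this way and the bdev_nvme multipath layer is expected to steer subsequent I/O to the remaining accessible path. A small sketch decoding such a pair (the decode_path_status helper is illustrative only, not part of the test):

  # Decode an NVMe path-related status pair as printed by SPDK, e.g. "03/02".
  decode_path_status() {
          local sct=${1%%/*} sc=${1##*/}
          [[ $sct == 03 ]] || { echo "not a path-related status"; return 1; }
          case $sc in
                  00) echo "Internal Path Error" ;;
                  01) echo "Asymmetric Access Persistent Loss" ;;
                  02) echo "Asymmetric Access Inaccessible" ;;
                  03) echo "Asymmetric Access Transition" ;;
                  *)  echo "unknown path-related status $sc" ;;
          esac
  }
  decode_path_status 03/02   # -> Asymmetric Access Inaccessible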
00:33:25.316 [2024-07-26 16:38:25.630572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:57688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:25.630599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:25.630643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:57696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:25.630669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:25.630713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:58088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.316 [2024-07-26 16:38:25.630739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:41.213993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.316 [2024-07-26 16:38:41.214099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:41.214160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.316 [2024-07-26 16:38:41.214189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:41.214228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.316 [2024-07-26 16:38:41.214254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:41.214292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:59816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:41.214317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:41.214354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:59848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:41.214380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:41.214427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:59880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:41.214453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:41.214504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:60248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.316 [2024-07-26 16:38:41.214529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:41.214565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.316 [2024-07-26 16:38:41.214589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:41.214624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:60280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.316 [2024-07-26 16:38:41.214648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:41.214683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:60296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.316 [2024-07-26 16:38:41.214708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:41.214743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:60312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.316 [2024-07-26 16:38:41.214767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:41.214802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:60328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.316 [2024-07-26 16:38:41.214826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:41.214862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:60344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.316 [2024-07-26 16:38:41.214886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:41.214921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.316 [2024-07-26 16:38:41.214945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:41.214981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:60376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.316 [2024-07-26 16:38:41.215006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:41.215056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:60392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.316 [2024-07-26 16:38:41.215091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:41.215130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:59912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:41.215155] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:41.215197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:59944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:41.215223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:41.215259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:59976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:41.215284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:41.215319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:59808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:41.215345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:41.215397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:59840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:41.215422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:41.215457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:59872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:41.215482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:41.215516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:59904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:41.215541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:41.215575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:60416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.316 [2024-07-26 16:38:41.215599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:41.215633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:60432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.316 [2024-07-26 16:38:41.215657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:41.215693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:60448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.316 [2024-07-26 16:38:41.215717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:41.215752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
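Since the try.txt file cat'd above runs to thousands of such command/completion lines, summarizing it is usually more useful than reading it linearly; one throwaway way to do so (not part of the test) is to count completions per (sct/sc) pair:

  grep -o '([0-9a-f][0-9a-f]/[0-9a-f][0-9a-f])' try.txt | sort | uniq -c | sort -rn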
00:33:25.316 [2024-07-26 16:38:41.215776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:41.215810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:60480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.316 [2024-07-26 16:38:41.215834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:41.215869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:60496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.316 [2024-07-26 16:38:41.215893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:41.215929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:60512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.316 [2024-07-26 16:38:41.215960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:41.215996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:60528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.316 [2024-07-26 16:38:41.216021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:41.216056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:60544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.316 [2024-07-26 16:38:41.216106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:41.216144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:60560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.316 [2024-07-26 16:38:41.216169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:41.216205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:41.216230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:41.216266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:59968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.316 [2024-07-26 16:38:41.216291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:25.316 [2024-07-26 16:38:41.216326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:60000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.317 [2024-07-26 16:38:41.216351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:25.317 [2024-07-26 16:38:41.216402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 
lba:60032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.317 [2024-07-26 16:38:41.216427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:25.317 [2024-07-26 16:38:41.216463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:60064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.317 [2024-07-26 16:38:41.216488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:25.317 [2024-07-26 16:38:41.219114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:60024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.317 [2024-07-26 16:38:41.219151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:25.317 [2024-07-26 16:38:41.219197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:60056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.317 [2024-07-26 16:38:41.219223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:25.317 [2024-07-26 16:38:41.219260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:60096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.317 [2024-07-26 16:38:41.219285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:25.317 [2024-07-26 16:38:41.219321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:60128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.317 [2024-07-26 16:38:41.219351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:25.317 [2024-07-26 16:38:41.219405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:60576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.317 [2024-07-26 16:38:41.219430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:25.317 [2024-07-26 16:38:41.219466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:60592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.317 [2024-07-26 16:38:41.219516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:25.317 [2024-07-26 16:38:41.219556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:60608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.317 [2024-07-26 16:38:41.219581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.317 [2024-07-26 16:38:41.219617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:60144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.317 [2024-07-26 16:38:41.219644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:25.317 [2024-07-26 16:38:41.219681] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:60176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.317 [2024-07-26 16:38:41.219706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:25.317 [2024-07-26 16:38:41.219741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:60624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.317 [2024-07-26 16:38:41.219766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:25.317 [2024-07-26 16:38:41.219802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:60640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.317 [2024-07-26 16:38:41.219827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:25.317 [2024-07-26 16:38:41.219879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:60656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.317 [2024-07-26 16:38:41.219921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:25.317 [2024-07-26 16:38:41.219960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:60672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.317 [2024-07-26 16:38:41.219986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:25.317 [2024-07-26 16:38:41.220021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:60688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.317 [2024-07-26 16:38:41.220068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:25.317 [2024-07-26 16:38:41.220110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:60224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.317 [2024-07-26 16:38:41.220137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:25.317 [2024-07-26 16:38:41.220173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:60072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.317 [2024-07-26 16:38:41.220198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:25.317 [2024-07-26 16:38:41.220246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:60104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.317 [2024-07-26 16:38:41.220272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.220310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:60136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.318 [2024-07-26 16:38:41.220334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004c p:0 m:0 dnr:0 
00:33:25.318 [2024-07-26 16:38:41.220371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:60704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.318 [2024-07-26 16:38:41.220395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.220432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:60720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.318 [2024-07-26 16:38:41.220456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.220493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:60736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.318 [2024-07-26 16:38:41.220518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.220554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:60752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.318 [2024-07-26 16:38:41.220579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.220615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:60168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.318 [2024-07-26 16:38:41.220639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.220675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:60760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.318 [2024-07-26 16:38:41.220700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.220735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:60776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.318 [2024-07-26 16:38:41.220776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.220812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:60792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.318 [2024-07-26 16:38:41.220837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.222459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:60808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.318 [2024-07-26 16:38:41.222494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.222537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:60824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.318 [2024-07-26 16:38:41.222563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.222605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:60272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.318 [2024-07-26 16:38:41.222632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.222683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:60304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.318 [2024-07-26 16:38:41.222708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.222770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:60336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.318 [2024-07-26 16:38:41.222796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.222832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:60368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.318 [2024-07-26 16:38:41.222857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.222893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:60400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.318 [2024-07-26 16:38:41.222917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.222953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:60840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.318 [2024-07-26 16:38:41.222978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.223014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:60856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.318 [2024-07-26 16:38:41.223038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.223083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:60872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.318 [2024-07-26 16:38:41.223110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.223146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.318 [2024-07-26 16:38:41.223171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.223207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:59816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.318 [2024-07-26 16:38:41.223231] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.223265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:59880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.318 [2024-07-26 16:38:41.223290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.223325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.318 [2024-07-26 16:38:41.223350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.223391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:60296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.318 [2024-07-26 16:38:41.223416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.223453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:60328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.318 [2024-07-26 16:38:41.223478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.223513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:60360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.318 [2024-07-26 16:38:41.223538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.223589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:60392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.318 [2024-07-26 16:38:41.223614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.223648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:59944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.318 [2024-07-26 16:38:41.223672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.223707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:59808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.318 [2024-07-26 16:38:41.223731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.223765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:59872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.318 [2024-07-26 16:38:41.223789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.223824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:60416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:25.318 [2024-07-26 16:38:41.223848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.223883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:60448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.318 [2024-07-26 16:38:41.223907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.223941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:60480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.318 [2024-07-26 16:38:41.223965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.223998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:60512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.318 [2024-07-26 16:38:41.224022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.224081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:60544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.318 [2024-07-26 16:38:41.224108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.224145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:59936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.318 [2024-07-26 16:38:41.224174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.224211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:60000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.318 [2024-07-26 16:38:41.224236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.224271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:60064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.318 [2024-07-26 16:38:41.224296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.224331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:60424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.318 [2024-07-26 16:38:41.224356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.224408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:60456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.318 [2024-07-26 16:38:41.224433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.224467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 
lba:60488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.318 [2024-07-26 16:38:41.224491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.224525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:60520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.318 [2024-07-26 16:38:41.224549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.224584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:60552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.318 [2024-07-26 16:38:41.224608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.224642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.318 [2024-07-26 16:38:41.224666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.224701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:60904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.318 [2024-07-26 16:38:41.224729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.224765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:60920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.318 [2024-07-26 16:38:41.224789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.224823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.318 [2024-07-26 16:38:41.224847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.224881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:60056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.318 [2024-07-26 16:38:41.224910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.224946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:60128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.318 [2024-07-26 16:38:41.224970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.225004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:60592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.318 [2024-07-26 16:38:41.225028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.225087] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:60144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.318 [2024-07-26 16:38:41.225117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.225155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:60624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.318 [2024-07-26 16:38:41.225180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.225216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:60656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.318 [2024-07-26 16:38:41.225242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.226654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:60688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.318 [2024-07-26 16:38:41.226701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.226744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:60072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.318 [2024-07-26 16:38:41.226770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.226805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:60136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.318 [2024-07-26 16:38:41.226830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.226865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:60720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.318 [2024-07-26 16:38:41.226889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.226944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:60752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.318 [2024-07-26 16:38:41.226971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.227009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.318 [2024-07-26 16:38:41.227053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.227105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:60792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.318 [2024-07-26 16:38:41.227131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 
00:33:25.318 [2024-07-26 16:38:41.228371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:60584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.318 [2024-07-26 16:38:41.228405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.228468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:60616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.318 [2024-07-26 16:38:41.228495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.228530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:60648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.318 [2024-07-26 16:38:41.228556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.228591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:60680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.318 [2024-07-26 16:38:41.228616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.228652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:60952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.318 [2024-07-26 16:38:41.228676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.228709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.318 [2024-07-26 16:38:41.228733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.228768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:60984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.318 [2024-07-26 16:38:41.228808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.228843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:61000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.318 [2024-07-26 16:38:41.228866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.228899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:60712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.318 [2024-07-26 16:38:41.228923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.228957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:60744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.318 [2024-07-26 16:38:41.228980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.229013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:60824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.318 [2024-07-26 16:38:41.229052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:25.318 [2024-07-26 16:38:41.229101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:60304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.318 [2024-07-26 16:38:41.229128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.229169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:60368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.319 [2024-07-26 16:38:41.229197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.229232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:60840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.319 [2024-07-26 16:38:41.229257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.229292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:60872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.319 [2024-07-26 16:38:41.229318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.229352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:59816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.319 [2024-07-26 16:38:41.229393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.229441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.319 [2024-07-26 16:38:41.229468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.229503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:60328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.319 [2024-07-26 16:38:41.229528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.229563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:60392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.319 [2024-07-26 16:38:41.229588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.229623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:59808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.319 [2024-07-26 16:38:41.229647] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.229682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:60416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.319 [2024-07-26 16:38:41.229707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.229742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:60480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.319 [2024-07-26 16:38:41.229767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.229802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:60544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.319 [2024-07-26 16:38:41.229827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.229862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:60000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.319 [2024-07-26 16:38:41.229886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.229921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:60424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.319 [2024-07-26 16:38:41.229951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.229986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:60488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.319 [2024-07-26 16:38:41.230011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.230046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:60552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.319 [2024-07-26 16:38:41.230100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.230139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.319 [2024-07-26 16:38:41.230164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.230200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:60936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.319 [2024-07-26 16:38:41.230224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.230260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:60128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:25.319 [2024-07-26 16:38:41.230285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.230321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:60144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.319 [2024-07-26 16:38:41.230346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.231446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:60656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.319 [2024-07-26 16:38:41.231478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.231534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:60784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.319 [2024-07-26 16:38:41.231558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.231592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:60816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.319 [2024-07-26 16:38:41.231616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.231649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:60848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.319 [2024-07-26 16:38:41.231673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.231706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:60200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.319 [2024-07-26 16:38:41.231729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.231762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:60248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.319 [2024-07-26 16:38:41.231813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.231851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:60312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.319 [2024-07-26 16:38:41.231875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.231909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:60376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.319 [2024-07-26 16:38:41.231932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.231966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 
lba:60072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.319 [2024-07-26 16:38:41.231990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.232025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.319 [2024-07-26 16:38:41.232072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.232112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:60760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.319 [2024-07-26 16:38:41.232138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.233937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:60432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.319 [2024-07-26 16:38:41.233971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.234015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:60496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.319 [2024-07-26 16:38:41.234041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.234086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:60560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.319 [2024-07-26 16:38:41.234113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.234150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:61016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.319 [2024-07-26 16:38:41.234174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.234209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:61032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.319 [2024-07-26 16:38:41.234234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.234269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:61048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.319 [2024-07-26 16:38:41.234293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.234329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:61064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.319 [2024-07-26 16:38:41.234353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.234410] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:61080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.319 [2024-07-26 16:38:41.234449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.234484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:60616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.319 [2024-07-26 16:38:41.234507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.234540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:60680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.319 [2024-07-26 16:38:41.234563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.234596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:60968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.319 [2024-07-26 16:38:41.234620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.234654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:61000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.319 [2024-07-26 16:38:41.234677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.234710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:60744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.319 [2024-07-26 16:38:41.234733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.234766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:60304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.319 [2024-07-26 16:38:41.234789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.234821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:60840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.319 [2024-07-26 16:38:41.234844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.234877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:59816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.319 [2024-07-26 16:38:41.234900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.234933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:60328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.319 [2024-07-26 16:38:41.234956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:33:25.319 [2024-07-26 16:38:41.234990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:59808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.319 [2024-07-26 16:38:41.235012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.235068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:60480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.319 [2024-07-26 16:38:41.235095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.235136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:60000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.319 [2024-07-26 16:38:41.235161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.235197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:60488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.319 [2024-07-26 16:38:41.235238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.235276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:60904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.319 [2024-07-26 16:38:41.235301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.235337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.319 [2024-07-26 16:38:41.235382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.235433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:60880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.319 [2024-07-26 16:38:41.235456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.235490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:60912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.319 [2024-07-26 16:38:41.235513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.235546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:60576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.319 [2024-07-26 16:38:41.235568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.235601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:60640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.319 [2024-07-26 16:38:41.235625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.235657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:60704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.319 [2024-07-26 16:38:41.235680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.235712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:60784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.319 [2024-07-26 16:38:41.235736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.235768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:60848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.319 [2024-07-26 16:38:41.235791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.235825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:60248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.319 [2024-07-26 16:38:41.235848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.235881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:60376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.319 [2024-07-26 16:38:41.235908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.235942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:60720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.319 [2024-07-26 16:38:41.235965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.239675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:60776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.319 [2024-07-26 16:38:41.239726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.239808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.319 [2024-07-26 16:38:41.239837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.239873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:61112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.319 [2024-07-26 16:38:41.239897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.239930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:61128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.319 [2024-07-26 16:38:41.239953] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.239986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:61144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.319 [2024-07-26 16:38:41.240009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.240057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.319 [2024-07-26 16:38:41.240099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.240137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:61176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.319 [2024-07-26 16:38:41.240163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.240198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:61192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.319 [2024-07-26 16:38:41.240224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:25.319 [2024-07-26 16:38:41.240258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:61208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.319 [2024-07-26 16:38:41.240283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.240318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:60944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.320 [2024-07-26 16:38:41.240343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.240394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:60976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.320 [2024-07-26 16:38:41.240442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.240478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:60808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.320 [2024-07-26 16:38:41.240503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.240535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:60496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.320 [2024-07-26 16:38:41.240558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.240590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:61016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:25.320 [2024-07-26 16:38:41.240614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.240647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:61048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.320 [2024-07-26 16:38:41.240671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.240705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:61080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.320 [2024-07-26 16:38:41.240728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.240760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:60680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.320 [2024-07-26 16:38:41.240787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.240819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:61000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.320 [2024-07-26 16:38:41.240842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.240874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:60304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.320 [2024-07-26 16:38:41.240897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.240930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:59816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.320 [2024-07-26 16:38:41.240955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.240987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:59808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.320 [2024-07-26 16:38:41.241010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.241057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:60000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.320 [2024-07-26 16:38:41.241093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.241130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.320 [2024-07-26 16:38:41.241157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.241196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 
lba:60880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.320 [2024-07-26 16:38:41.241221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.241256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:60576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.320 [2024-07-26 16:38:41.241280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.241314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:60704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.320 [2024-07-26 16:38:41.241340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.241391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:60848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.320 [2024-07-26 16:38:41.241415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.241448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:60376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.320 [2024-07-26 16:38:41.241472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.241504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:60856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.320 [2024-07-26 16:38:41.241529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.241562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:60296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.320 [2024-07-26 16:38:41.241585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.241617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:60448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.320 [2024-07-26 16:38:41.241641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.241673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:60888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.320 [2024-07-26 16:38:41.241696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.241730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:60592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.320 [2024-07-26 16:38:41.241755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.244475] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:60688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.320 [2024-07-26 16:38:41.244527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.244581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:60792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.320 [2024-07-26 16:38:41.244609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.244651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:61232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.320 [2024-07-26 16:38:41.244678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.244713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:61248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.320 [2024-07-26 16:38:41.244739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.244774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:61264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.320 [2024-07-26 16:38:41.244799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.244834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:61280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.320 [2024-07-26 16:38:41.244859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.244894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:61296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.320 [2024-07-26 16:38:41.244934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.244968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:61312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.320 [2024-07-26 16:38:41.244993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.245027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:61328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.320 [2024-07-26 16:38:41.245074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.245113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:61344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.320 [2024-07-26 16:38:41.245138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
00:33:25.320 [2024-07-26 16:38:41.245173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.320 [2024-07-26 16:38:41.245197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.245232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:61376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.320 [2024-07-26 16:38:41.245258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.245293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:61392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.320 [2024-07-26 16:38:41.245318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.245352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:61408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.320 [2024-07-26 16:38:41.245394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.245428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:61008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.320 [2024-07-26 16:38:41.245456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.245490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:61040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.320 [2024-07-26 16:38:41.245515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.245547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:61072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.320 [2024-07-26 16:38:41.245571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.245603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:60984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.320 [2024-07-26 16:38:41.245626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.245657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:60872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.320 [2024-07-26 16:38:41.245708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.245753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:60392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.320 [2024-07-26 16:38:41.245778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.245813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:60544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.320 [2024-07-26 16:38:41.245838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.245872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:61096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.320 [2024-07-26 16:38:41.245897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.245931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:61128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.320 [2024-07-26 16:38:41.245956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.245991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:61160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.320 [2024-07-26 16:38:41.246016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.246873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.320 [2024-07-26 16:38:41.246920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.246962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:60944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.320 [2024-07-26 16:38:41.247003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.247052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:60808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.320 [2024-07-26 16:38:41.247093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.247149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:61016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.320 [2024-07-26 16:38:41.247175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.247211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:61080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.320 [2024-07-26 16:38:41.247236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.247271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:61000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.320 [2024-07-26 16:38:41.247295] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.247331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:59816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.320 [2024-07-26 16:38:41.247371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.247426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.320 [2024-07-26 16:38:41.247453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.247503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:60880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.320 [2024-07-26 16:38:41.247528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.247562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:60704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.320 [2024-07-26 16:38:41.247586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.247620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:60376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.320 [2024-07-26 16:38:41.247644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.247678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:60296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.320 [2024-07-26 16:38:41.247702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.247736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:60888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.320 [2024-07-26 16:38:41.247762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.247796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:60936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.320 [2024-07-26 16:38:41.247821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.247872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:61432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.320 [2024-07-26 16:38:41.247895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.247933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:61448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:25.320 [2024-07-26 16:38:41.247974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.248010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:60656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.320 [2024-07-26 16:38:41.248044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:25.320 [2024-07-26 16:38:41.250028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:61464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.321 [2024-07-26 16:38:41.250074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.250121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.321 [2024-07-26 16:38:41.250147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.250183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.321 [2024-07-26 16:38:41.250208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.250243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.321 [2024-07-26 16:38:41.250268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.250302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:61528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.321 [2024-07-26 16:38:41.250327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.250378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:61544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.321 [2024-07-26 16:38:41.250417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.250451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:61560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.321 [2024-07-26 16:38:41.250474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.250506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.321 [2024-07-26 16:38:41.250529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.250562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 
lba:61136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.321 [2024-07-26 16:38:41.250587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.250620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:61168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.321 [2024-07-26 16:38:41.250643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.250681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:61200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.321 [2024-07-26 16:38:41.250705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.250737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:61032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.321 [2024-07-26 16:38:41.250762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.250794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:60792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.321 [2024-07-26 16:38:41.250817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.250849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:61248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.321 [2024-07-26 16:38:41.250872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.250904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:61280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.321 [2024-07-26 16:38:41.250927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.250959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:61312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.321 [2024-07-26 16:38:41.250982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.251014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.321 [2024-07-26 16:38:41.251054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.251118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.321 [2024-07-26 16:38:41.251145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.251180] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:61408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.321 [2024-07-26 16:38:41.251206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.251241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:61040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.321 [2024-07-26 16:38:41.251266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.251301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:60984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.321 [2024-07-26 16:38:41.251325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.251375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:60392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.321 [2024-07-26 16:38:41.251417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.251456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:61096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.321 [2024-07-26 16:38:41.251483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.251515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:61160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.321 [2024-07-26 16:38:41.251539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.251571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:60840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.321 [2024-07-26 16:38:41.251594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.251626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:60480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.321 [2024-07-26 16:38:41.251649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.251683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:60944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.321 [2024-07-26 16:38:41.251707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.251739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:61016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.321 [2024-07-26 16:38:41.251762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 
00:33:25.321 [2024-07-26 16:38:41.251794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:61000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.321 [2024-07-26 16:38:41.251818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.251850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:60000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.321 [2024-07-26 16:38:41.251874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.251907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:60704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.321 [2024-07-26 16:38:41.251929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.251961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:60296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.321 [2024-07-26 16:38:41.251985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.252017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:60936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.321 [2024-07-26 16:38:41.252057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.252118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.321 [2024-07-26 16:38:41.252145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.255332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:61568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.321 [2024-07-26 16:38:41.255390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.255436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:61584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.321 [2024-07-26 16:38:41.255478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.255514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:61600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.321 [2024-07-26 16:38:41.255538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.255571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:61616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.321 [2024-07-26 16:38:41.255594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.255626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:61632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.321 [2024-07-26 16:38:41.255673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.255726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:61648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.321 [2024-07-26 16:38:41.255752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.255787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:61664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.321 [2024-07-26 16:38:41.255812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.255848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:61680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.321 [2024-07-26 16:38:41.255918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.255959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.321 [2024-07-26 16:38:41.255985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.256021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:61240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.321 [2024-07-26 16:38:41.256069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.256107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:61272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.321 [2024-07-26 16:38:41.256134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.256168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:61304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.321 [2024-07-26 16:38:41.256192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.256225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:61336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.321 [2024-07-26 16:38:41.256255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.256290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:61368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.321 [2024-07-26 16:38:41.256315] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.256364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:61400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.321 [2024-07-26 16:38:41.256387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.256420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.321 [2024-07-26 16:38:41.256444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.256477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.321 [2024-07-26 16:38:41.256500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.256547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:61544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.321 [2024-07-26 16:38:41.256572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.256622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:61104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.321 [2024-07-26 16:38:41.256648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.256682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:61168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.321 [2024-07-26 16:38:41.256707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.256741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:61032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.321 [2024-07-26 16:38:41.256765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.256801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:61248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.321 [2024-07-26 16:38:41.256825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.256860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:61312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.321 [2024-07-26 16:38:41.256884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.256934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:25.321 [2024-07-26 16:38:41.256959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.256993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:61040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.321 [2024-07-26 16:38:41.257033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.257082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:60392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.321 [2024-07-26 16:38:41.257109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.257144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:61160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.321 [2024-07-26 16:38:41.257170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.257205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:60480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.321 [2024-07-26 16:38:41.257230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.257264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.321 [2024-07-26 16:38:41.257288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.257323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:60000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.321 [2024-07-26 16:38:41.257363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.257398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:60296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.321 [2024-07-26 16:38:41.257437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.257471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.321 [2024-07-26 16:38:41.257494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.257528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:61144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.321 [2024-07-26 16:38:41.257551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.257583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 
nsid:1 lba:61176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.321 [2024-07-26 16:38:41.257606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.257639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:61048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.321 [2024-07-26 16:38:41.257663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.258944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:61424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.321 [2024-07-26 16:38:41.258996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.259052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:61456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.321 [2024-07-26 16:38:41.259087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.259129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.321 [2024-07-26 16:38:41.259155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.259189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:61736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.321 [2024-07-26 16:38:41.259212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:25.321 [2024-07-26 16:38:41.259246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:61752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.322 [2024-07-26 16:38:41.259270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.259310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:61768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.322 [2024-07-26 16:38:41.259335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.259369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.322 [2024-07-26 16:38:41.259394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.259428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:61800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.322 [2024-07-26 16:38:41.259452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.259485] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:61816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.322 [2024-07-26 16:38:41.259509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.259542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:61832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.322 [2024-07-26 16:38:41.259566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.259616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:61472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.322 [2024-07-26 16:38:41.259641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.259675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.322 [2024-07-26 16:38:41.259700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.259735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:61536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.322 [2024-07-26 16:38:41.259760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.261412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:61232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.322 [2024-07-26 16:38:41.261447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.261489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:61296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.322 [2024-07-26 16:38:41.261521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.261557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:61360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.322 [2024-07-26 16:38:41.261582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.261633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:61584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.322 [2024-07-26 16:38:41.261658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.261707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:61616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.322 [2024-07-26 16:38:41.261730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 
00:33:25.322 [2024-07-26 16:38:41.261763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.322 [2024-07-26 16:38:41.261786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.261818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.322 [2024-07-26 16:38:41.261840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.261872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:61240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.322 [2024-07-26 16:38:41.261895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.261927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:61304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.322 [2024-07-26 16:38:41.261950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.261982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:61368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.322 [2024-07-26 16:38:41.262005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.262052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:61480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.322 [2024-07-26 16:38:41.262099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.262158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:61544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.322 [2024-07-26 16:38:41.262186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.262221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:61168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.322 [2024-07-26 16:38:41.262259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.262296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:61248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.322 [2024-07-26 16:38:41.262327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.262364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.322 [2024-07-26 16:38:41.262389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.262424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:60392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.322 [2024-07-26 16:38:41.262448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.262483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:60480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.322 [2024-07-26 16:38:41.262524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.262574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:60000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.322 [2024-07-26 16:38:41.262598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.262631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:61448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.322 [2024-07-26 16:38:41.262654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.262686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:61176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.322 [2024-07-26 16:38:41.262710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.262743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:61128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.322 [2024-07-26 16:38:41.262766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.262799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:61856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.322 [2024-07-26 16:38:41.262821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.262854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.322 [2024-07-26 16:38:41.262878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.262911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:61888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.322 [2024-07-26 16:38:41.262947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.262983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:61904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.322 [2024-07-26 16:38:41.263007] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.263041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:61080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.322 [2024-07-26 16:38:41.263091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.263136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:61456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.322 [2024-07-26 16:38:41.263161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.263196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.322 [2024-07-26 16:38:41.263220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.263254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:61768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.322 [2024-07-26 16:38:41.263278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.263311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:61800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.322 [2024-07-26 16:38:41.263336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.263385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:61832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.322 [2024-07-26 16:38:41.263409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.263443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:61504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.322 [2024-07-26 16:38:41.263467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.266456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.322 [2024-07-26 16:38:41.266505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.266597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:61920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.322 [2024-07-26 16:38:41.266630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.266668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:61936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:25.322 [2024-07-26 16:38:41.266694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.266730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:61952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.322 [2024-07-26 16:38:41.266755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.266790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.322 [2024-07-26 16:38:41.266815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.266850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:61984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.322 [2024-07-26 16:38:41.266876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.266933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:62000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.322 [2024-07-26 16:38:41.266959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.266995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:61576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.322 [2024-07-26 16:38:41.267036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.267097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:61608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.322 [2024-07-26 16:38:41.267125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.267179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:61640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.322 [2024-07-26 16:38:41.267228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:25.322 [2024-07-26 16:38:41.267269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:61672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.322 [2024-07-26 16:38:41.267296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:25.322 Received shutdown signal, test time was about 32.374693 seconds 00:33:25.322 00:33:25.322 Latency(us) 00:33:25.322 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:25.322 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:33:25.322 Verification LBA range: start 0x0 length 0x4000 00:33:25.322 Nvme0n1 : 32.37 5782.29 22.59 0.00 0.00 22099.52 1025.52 4026531.84 00:33:25.322 
=================================================================================================================== 00:33:25.322 Total : 5782.29 22.59 0.00 0.00 22099.52 1025.52 4026531.84 00:33:25.322 16:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:25.322 16:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:33:25.322 16:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:25.322 16:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:33:25.322 16:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:25.322 16:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:33:25.322 16:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:25.322 16:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:33:25.322 16:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:25.322 16:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:25.322 rmmod nvme_tcp 00:33:25.580 rmmod nvme_fabrics 00:33:25.580 rmmod nvme_keyring 00:33:25.580 16:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:25.580 16:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:33:25.580 16:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:33:25.580 16:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 785904 ']' 00:33:25.580 16:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 785904 00:33:25.580 16:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 785904 ']' 00:33:25.580 16:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 785904 00:33:25.580 16:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:33:25.580 16:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:25.580 16:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 785904 00:33:25.580 16:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:25.580 16:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:25.580 16:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 785904' 00:33:25.580 killing process with pid 785904 00:33:25.580 16:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 785904 00:33:25.580 16:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 785904 00:33:26.954 16:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' 
== iso ']' 00:33:26.954 16:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:26.954 16:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:26.954 16:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:26.954 16:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:26.954 16:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:26.954 16:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:26.954 16:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:29.487 16:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:29.487 00:33:29.487 real 0m44.461s 00:33:29.487 user 2m11.049s 00:33:29.487 sys 0m10.555s 00:33:29.487 16:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:29.487 16:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:29.487 ************************************ 00:33:29.487 END TEST nvmf_host_multipath_status 00:33:29.487 ************************************ 00:33:29.487 16:38:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:29.487 16:38:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:29.487 16:38:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:29.487 16:38:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.487 ************************************ 00:33:29.487 START TEST nvmf_discovery_remove_ifc 00:33:29.487 ************************************ 00:33:29.487 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:29.487 * Looking for test storage... 
00:33:29.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:29.487 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:29.487 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:33:29.487 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:29.487 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:29.487 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:29.487 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:29.487 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:29.487 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:29.487 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:29.487 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:29.487 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:29.487 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:29.487 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:29.487 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:29.487 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:29.487 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:29.487 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:29.487 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:29.487 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:29.487 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:29.487 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:29.487 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:29.487 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.487 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.487 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.487 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:33:29.488 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.488 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:33:29.488 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:29.488 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:29.488 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:29.488 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:29.488 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:29.488 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 
00:33:29.488 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:29.488 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:29.488 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:33:29.488 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:33:29.488 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:33:29.488 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:33:29.488 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:33:29.488 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:33:29.488 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:33:29.488 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:29.488 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:29.488 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:29.488 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:29.488 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:29.488 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:29.488 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:29.488 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:29.488 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:29.488 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:29.488 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:33:29.488 16:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:31.387 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:31.387 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:33:31.387 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:31.387 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:31.387 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:31.387 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:31.387 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:31.387 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:33:31.388 16:38:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:31.388 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:31.388 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:31.388 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:31.388 
16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:31.388 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:31.388 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:31.388 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:33:31.388 00:33:31.388 --- 10.0.0.2 ping statistics --- 00:33:31.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:31.388 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:31.388 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:31.388 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:33:31.388 00:33:31.388 --- 10.0.0.1 ping statistics --- 00:33:31.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:31.388 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:31.388 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:31.389 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:31.389 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:31.389 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:31.389 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:31.389 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:31.389 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:33:31.389 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:31.389 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:31.389 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:31.389 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=792769 00:33:31.389 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:31.389 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 792769 00:33:31.389 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 792769 ']' 00:33:31.389 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:31.389 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:31.389 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:31.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:31.389 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:31.389 16:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:31.389 [2024-07-26 16:38:51.014193] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:33:31.389 [2024-07-26 16:38:51.014329] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:31.389 EAL: No free 2048 kB hugepages reported on node 1 00:33:31.647 [2024-07-26 16:38:51.158433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:31.647 [2024-07-26 16:38:51.393535] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:31.647 [2024-07-26 16:38:51.393606] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:31.647 [2024-07-26 16:38:51.393634] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:31.647 [2024-07-26 16:38:51.393660] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:31.647 [2024-07-26 16:38:51.393682] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:31.647 [2024-07-26 16:38:51.393736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:32.213 16:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:32.213 16:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:33:32.213 16:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:32.213 16:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:32.213 16:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:32.471 16:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:32.471 16:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:33:32.471 16:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.471 16:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:32.471 [2024-07-26 16:38:52.012984] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:32.471 [2024-07-26 16:38:52.021230] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:32.471 null0 00:33:32.471 [2024-07-26 16:38:52.053118] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:32.471 16:38:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.471 16:38:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=792920 00:33:32.471 16:38:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 
00:33:32.471 16:38:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 792920 /tmp/host.sock 00:33:32.471 16:38:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 792920 ']' 00:33:32.471 16:38:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:33:32.471 16:38:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:32.471 16:38:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:32.471 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:32.471 16:38:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:32.471 16:38:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:32.471 [2024-07-26 16:38:52.159881] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:33:32.472 [2024-07-26 16:38:52.160015] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid792920 ] 00:33:32.472 EAL: No free 2048 kB hugepages reported on node 1 00:33:32.730 [2024-07-26 16:38:52.291225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.988 [2024-07-26 16:38:52.527554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:33.553 16:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:33.553 16:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:33:33.553 16:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:33.553 16:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:33:33.553 16:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.553 16:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:33.553 16:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.554 16:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:33:33.554 16:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.554 16:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:33.811 16:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.811 16:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:33:33.811 16:38:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.811 16:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:35.184 [2024-07-26 16:38:54.507480] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:35.184 [2024-07-26 16:38:54.507537] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:35.184 [2024-07-26 16:38:54.507606] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:35.184 [2024-07-26 16:38:54.635106] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:35.184 [2024-07-26 16:38:54.859494] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:35.184 [2024-07-26 16:38:54.859610] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:35.184 [2024-07-26 16:38:54.859716] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:35.184 [2024-07-26 16:38:54.859770] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:35.184 [2024-07-26 16:38:54.859838] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:35.184 16:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.184 16:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:33:35.184 16:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:35.184 16:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:35.184 16:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:35.184 16:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.184 16:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:35.184 16:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:35.184 16:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:35.184 [2024-07-26 16:38:54.865112] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6150001f2780 was disconnected and freed. delete nvme_qpair. 
00:33:35.184 16:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.184 16:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:33:35.184 16:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:33:35.184 16:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:33:35.442 16:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:33:35.442 16:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:35.442 16:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:35.442 16:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:35.442 16:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.442 16:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:35.442 16:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:35.442 16:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:35.442 16:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.442 16:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:35.442 16:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:36.374 16:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:36.374 16:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:36.374 16:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.374 16:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:36.374 16:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:36.374 16:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:36.374 16:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:36.374 16:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.374 16:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:36.374 16:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:37.307 16:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:37.307 16:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:37.307 16:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:37.307 16:38:57 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.307 16:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:37.307 16:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:37.307 16:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:37.565 16:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.565 16:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:37.565 16:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:38.498 16:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:38.498 16:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:38.498 16:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.498 16:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:38.498 16:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:38.498 16:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:38.498 16:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:38.498 16:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.498 16:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:38.498 16:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:39.431 16:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:39.431 16:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:39.431 16:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:39.431 16:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.431 16:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:39.431 16:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:39.431 16:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:39.431 16:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.431 16:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:39.431 16:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:40.803 16:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:40.803 16:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:40.803 16:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:40.803 16:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.803 16:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:40.803 16:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:40.803 16:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:40.803 16:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.803 16:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:40.803 16:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:40.803 [2024-07-26 16:39:00.300407] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:33:40.803 [2024-07-26 16:39:00.300510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:40.803 [2024-07-26 16:39:00.300555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:40.803 [2024-07-26 16:39:00.300599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:40.803 [2024-07-26 16:39:00.300623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:40.803 [2024-07-26 16:39:00.300647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:40.803 [2024-07-26 16:39:00.300672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:40.803 [2024-07-26 16:39:00.300696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:40.803 [2024-07-26 16:39:00.300720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:40.803 [2024-07-26 16:39:00.300745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:40.803 [2024-07-26 16:39:00.300770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:40.803 [2024-07-26 16:39:00.300792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:33:40.803 [2024-07-26 16:39:00.310426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:40.803 [2024-07-26 16:39:00.320485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:41.735 16:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:41.735 16:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:33:41.735 16:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:41.735 16:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.735 16:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:41.735 16:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:41.735 16:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:41.735 [2024-07-26 16:39:01.325149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:33:41.735 [2024-07-26 16:39:01.325254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:41.735 [2024-07-26 16:39:01.325295] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:33:41.735 [2024-07-26 16:39:01.325371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:41.735 [2024-07-26 16:39:01.326143] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:41.735 [2024-07-26 16:39:01.326214] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:41.735 [2024-07-26 16:39:01.326241] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:41.735 [2024-07-26 16:39:01.326266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:41.735 [2024-07-26 16:39:01.326317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:41.735 [2024-07-26 16:39:01.326359] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:41.736 16:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.736 16:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:41.736 16:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:42.669 [2024-07-26 16:39:02.328918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:42.669 [2024-07-26 16:39:02.328986] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:42.669 [2024-07-26 16:39:02.329011] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:42.669 [2024-07-26 16:39:02.329036] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:33:42.669 [2024-07-26 16:39:02.329094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
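The connection-timeout and failed-reset errors above are the expected result of the interface teardown logged at host/discovery_remove_ifc.sh@75 and @76 earlier in this phase. As a minimal sketch (namespace and interface names as used by this run), that teardown amounts to:

    # drop the target-side address and take the link down inside the test namespace
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down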
00:33:42.669 [2024-07-26 16:39:02.329173] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:33:42.669 [2024-07-26 16:39:02.329263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:42.669 [2024-07-26 16:39:02.329294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.669 [2024-07-26 16:39:02.329322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:42.669 [2024-07-26 16:39:02.329360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.669 [2024-07-26 16:39:02.329384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:42.669 [2024-07-26 16:39:02.329416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.669 [2024-07-26 16:39:02.329441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:42.669 [2024-07-26 16:39:02.329464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.669 [2024-07-26 16:39:02.329488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:42.669 [2024-07-26 16:39:02.329511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.669 [2024-07-26 16:39:02.329533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
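The repeated get_bdev_list/sleep cycles in this trace come from a small polling helper in host/discovery_remove_ifc.sh. A rough reconstruction from the commands visible above (RPC socket path and jq filter exactly as logged; the loop structure in the actual script may differ):

    get_bdev_list() {
        # query the host app's RPC socket and return bdev names as one sorted line
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # poll once per second until the bdev list matches the expected value
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }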
00:33:42.669 [2024-07-26 16:39:02.329621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:42.669 [2024-07-26 16:39:02.330604] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:33:42.669 [2024-07-26 16:39:02.330641] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:33:42.669 16:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:42.669 16:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:42.669 16:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:42.669 16:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.669 16:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:42.669 16:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:42.669 16:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:42.669 16:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.669 16:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:33:42.669 16:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:42.669 16:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:42.669 16:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:33:42.669 16:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:42.669 16:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:42.669 16:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:42.669 16:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.669 16:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:42.669 16:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:42.669 16:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:42.927 16:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.927 16:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:42.927 16:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:43.861 16:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:43.861 16:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:43.861 16:39:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.861 16:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:43.861 16:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:43.861 16:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:43.861 16:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:43.861 16:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.861 16:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:43.861 16:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:44.796 [2024-07-26 16:39:04.384313] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:44.796 [2024-07-26 16:39:04.384388] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:44.796 [2024-07-26 16:39:04.384435] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:44.796 [2024-07-26 16:39:04.470745] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:33:44.796 16:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:44.796 16:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:44.796 16:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:44.796 16:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.796 16:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:44.796 16:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:44.796 16:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:44.796 16:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.796 16:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:44.796 16:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:45.055 [2024-07-26 16:39:04.697083] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:45.055 [2024-07-26 16:39:04.697194] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:45.055 [2024-07-26 16:39:04.697273] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:45.055 [2024-07-26 16:39:04.697312] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:33:45.055 [2024-07-26 16:39:04.697357] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:45.055 [2024-07-26 16:39:04.701728] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6150001f2f00 was disconnected and 
freed. delete nvme_qpair. 00:33:45.992 16:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:45.992 16:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:45.992 16:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:45.992 16:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.992 16:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:45.992 16:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:45.992 16:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:45.992 16:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.992 16:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:33:45.992 16:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:33:45.992 16:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 792920 00:33:45.992 16:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 792920 ']' 00:33:45.992 16:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 792920 00:33:45.992 16:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:33:45.992 16:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:45.992 16:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 792920 00:33:45.992 16:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:45.992 16:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:45.992 16:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 792920' 00:33:45.992 killing process with pid 792920 00:33:45.992 16:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 792920 00:33:45.992 16:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 792920 00:33:47.421 16:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:33:47.421 16:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:47.421 16:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:33:47.421 16:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:47.421 16:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:33:47.421 16:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:47.421 16:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:47.421 rmmod nvme_tcp 00:33:47.421 rmmod nvme_fabrics 00:33:47.421 rmmod nvme_keyring 
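The killprocess calls in this teardown (pid 792920 above, pid 792769 below) follow the usual common/autotest_common.sh pattern visible in the trace: verify the pid, look up the process name, log, kill, and reap. A simplified sketch of that helper, omitting the sudo and non-Linux special cases that the uname/ps checks in the trace hint at:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                   # a pid argument is required
        kill -0 "$pid" || return 1                  # bail out if it already exited
        local name
        name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0 / reactor_1 here
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                         # reap it; ignore its exit status
    }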
00:33:47.421 16:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:47.421 16:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:33:47.421 16:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:33:47.421 16:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 792769 ']' 00:33:47.421 16:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 792769 00:33:47.421 16:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 792769 ']' 00:33:47.421 16:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 792769 00:33:47.421 16:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:33:47.421 16:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:47.421 16:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 792769 00:33:47.421 16:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:47.421 16:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:47.421 16:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 792769' 00:33:47.421 killing process with pid 792769 00:33:47.421 16:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 792769 00:33:47.421 16:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 792769 00:33:48.361 16:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:48.361 16:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:48.361 16:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:48.361 16:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:48.361 16:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:48.361 16:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:48.361 16:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:48.361 16:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:50.898 16:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:50.898 00:33:50.898 real 0m21.427s 00:33:50.898 user 0m31.550s 00:33:50.898 sys 0m3.291s 00:33:50.898 16:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:50.898 16:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:50.898 ************************************ 00:33:50.898 END TEST nvmf_discovery_remove_ifc 00:33:50.898 ************************************ 00:33:50.898 16:39:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:50.898 16:39:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:50.898 16:39:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:50.898 16:39:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.898 ************************************ 00:33:50.898 START TEST nvmf_identify_kernel_target 00:33:50.898 ************************************ 00:33:50.898 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:50.898 * Looking for test storage... 00:33:50.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:50.898 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:50.898 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:33:50.898 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:50.898 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:50.898 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:50.898 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:50.898 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:50.898 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:50.898 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:50.898 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:50.898 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:50.898 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:50.898 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:50.898 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:50.898 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:50.898 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:50.898 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:50.898 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:50.898 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:50.898 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:50.898 16:39:10 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:50.898 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:50.898 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.898 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.899 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.899 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:33:50.899 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.899 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:33:50.899 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:50.899 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:50.899 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:33:50.899 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:50.899 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:50.899 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:50.899 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:50.899 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:50.899 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:33:50.899 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:50.899 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:50.899 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:50.899 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:50.899 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:50.899 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:50.899 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:50.899 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:50.899 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:50.899 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:50.899 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:33:50.899 16:39:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:52.804 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:33:52.805 
16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:52.805 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:52.805 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:52.805 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:52.805 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:52.805 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:52.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:52.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:33:52.806 00:33:52.806 --- 10.0.0.2 ping statistics --- 00:33:52.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:52.806 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:52.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:52.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:33:52.806 00:33:52.806 --- 10.0.0.1 ping statistics --- 00:33:52.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:52.806 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:52.806 16:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:53.741 Waiting for block devices as requested 00:33:53.999 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:33:53.999 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:53.999 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:54.257 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:54.257 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:54.257 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:54.257 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:54.517 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:54.517 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:54.517 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:54.517 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:54.776 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:54.776 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:54.776 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:54.776 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:55.036 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:55.036 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:55.297 16:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:33:55.297 16:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:55.297 16:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:33:55.297 16:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:33:55.297 16:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:55.297 16:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:33:55.297 16:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:33:55.297 16:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 
00:33:55.297 16:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:55.297 No valid GPT data, bailing 00:33:55.297 16:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:55.297 16:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:33:55.297 16:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:33:55.297 16:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:33:55.297 16:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:33:55.297 16:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:55.297 16:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:55.297 16:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:55.297 16:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:55.297 16:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:33:55.297 16:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:33:55.297 16:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:33:55.297 16:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:33:55.297 16:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:33:55.297 16:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:33:55.297 16:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:33:55.297 16:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:55.297 16:39:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:33:55.297 00:33:55.297 Discovery Log Number of Records 2, Generation counter 2 00:33:55.297 =====Discovery Log Entry 0====== 00:33:55.297 trtype: tcp 00:33:55.297 adrfam: ipv4 00:33:55.298 subtype: current discovery subsystem 00:33:55.298 treq: not specified, sq flow control disable supported 00:33:55.298 portid: 1 00:33:55.298 trsvcid: 4420 00:33:55.298 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:55.298 traddr: 10.0.0.1 00:33:55.298 eflags: none 00:33:55.298 sectype: none 00:33:55.298 =====Discovery Log Entry 1====== 00:33:55.298 trtype: tcp 00:33:55.298 adrfam: ipv4 00:33:55.298 subtype: nvme subsystem 00:33:55.298 treq: not specified, sq flow control disable supported 00:33:55.298 portid: 1 00:33:55.298 trsvcid: 4420 00:33:55.298 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:55.298 traddr: 10.0.0.1 00:33:55.298 eflags: none 00:33:55.298 sectype: none 00:33:55.298 16:39:14 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:33:55.298 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:33:55.298 EAL: No free 2048 kB hugepages reported on node 1 00:33:55.559 ===================================================== 00:33:55.559 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:33:55.559 ===================================================== 00:33:55.559 Controller Capabilities/Features 00:33:55.559 ================================ 00:33:55.559 Vendor ID: 0000 00:33:55.559 Subsystem Vendor ID: 0000 00:33:55.559 Serial Number: 97b18ac6e1dd06556433 00:33:55.559 Model Number: Linux 00:33:55.559 Firmware Version: 6.7.0-68 00:33:55.559 Recommended Arb Burst: 0 00:33:55.559 IEEE OUI Identifier: 00 00 00 00:33:55.559 Multi-path I/O 00:33:55.559 May have multiple subsystem ports: No 00:33:55.559 May have multiple controllers: No 00:33:55.559 Associated with SR-IOV VF: No 00:33:55.559 Max Data Transfer Size: Unlimited 00:33:55.559 Max Number of Namespaces: 0 00:33:55.559 Max Number of I/O Queues: 1024 00:33:55.559 NVMe Specification Version (VS): 1.3 00:33:55.559 NVMe Specification Version (Identify): 1.3 00:33:55.559 Maximum Queue Entries: 1024 00:33:55.559 Contiguous Queues Required: No 00:33:55.559 Arbitration Mechanisms Supported 00:33:55.559 Weighted Round Robin: Not Supported 00:33:55.559 Vendor Specific: Not Supported 00:33:55.559 Reset Timeout: 7500 ms 00:33:55.559 Doorbell Stride: 4 bytes 00:33:55.559 NVM Subsystem Reset: Not Supported 00:33:55.559 Command Sets Supported 00:33:55.559 NVM Command Set: Supported 00:33:55.559 Boot Partition: Not Supported 00:33:55.559 Memory Page Size Minimum: 4096 bytes 00:33:55.559 Memory Page Size Maximum: 4096 bytes 00:33:55.559 Persistent Memory Region: Not Supported 00:33:55.559 Optional Asynchronous Events Supported 00:33:55.559 Namespace Attribute Notices: Not Supported 00:33:55.559 Firmware Activation Notices: Not Supported 00:33:55.559 ANA Change Notices: Not Supported 00:33:55.559 PLE Aggregate Log Change Notices: Not Supported 00:33:55.559 LBA Status Info Alert Notices: Not Supported 00:33:55.559 EGE Aggregate Log Change Notices: Not Supported 00:33:55.559 Normal NVM Subsystem Shutdown event: Not Supported 00:33:55.559 Zone Descriptor Change Notices: Not Supported 00:33:55.559 Discovery Log Change Notices: Supported 00:33:55.559 Controller Attributes 00:33:55.559 128-bit Host Identifier: Not Supported 00:33:55.559 Non-Operational Permissive Mode: Not Supported 00:33:55.559 NVM Sets: Not Supported 00:33:55.559 Read Recovery Levels: Not Supported 00:33:55.559 Endurance Groups: Not Supported 00:33:55.559 Predictable Latency Mode: Not Supported 00:33:55.559 Traffic Based Keep ALive: Not Supported 00:33:55.559 Namespace Granularity: Not Supported 00:33:55.559 SQ Associations: Not Supported 00:33:55.559 UUID List: Not Supported 00:33:55.559 Multi-Domain Subsystem: Not Supported 00:33:55.559 Fixed Capacity Management: Not Supported 00:33:55.559 Variable Capacity Management: Not Supported 00:33:55.559 Delete Endurance Group: Not Supported 00:33:55.559 Delete NVM Set: Not Supported 00:33:55.559 Extended LBA Formats Supported: Not Supported 00:33:55.559 Flexible Data Placement Supported: Not Supported 00:33:55.559 00:33:55.559 Controller Memory Buffer Support 00:33:55.559 ================================ 00:33:55.559 Supported: No 
00:33:55.559 00:33:55.559 Persistent Memory Region Support 00:33:55.559 ================================ 00:33:55.559 Supported: No 00:33:55.559 00:33:55.559 Admin Command Set Attributes 00:33:55.559 ============================ 00:33:55.559 Security Send/Receive: Not Supported 00:33:55.559 Format NVM: Not Supported 00:33:55.559 Firmware Activate/Download: Not Supported 00:33:55.559 Namespace Management: Not Supported 00:33:55.559 Device Self-Test: Not Supported 00:33:55.559 Directives: Not Supported 00:33:55.559 NVMe-MI: Not Supported 00:33:55.559 Virtualization Management: Not Supported 00:33:55.559 Doorbell Buffer Config: Not Supported 00:33:55.559 Get LBA Status Capability: Not Supported 00:33:55.559 Command & Feature Lockdown Capability: Not Supported 00:33:55.559 Abort Command Limit: 1 00:33:55.559 Async Event Request Limit: 1 00:33:55.559 Number of Firmware Slots: N/A 00:33:55.559 Firmware Slot 1 Read-Only: N/A 00:33:55.559 Firmware Activation Without Reset: N/A 00:33:55.560 Multiple Update Detection Support: N/A 00:33:55.560 Firmware Update Granularity: No Information Provided 00:33:55.560 Per-Namespace SMART Log: No 00:33:55.560 Asymmetric Namespace Access Log Page: Not Supported 00:33:55.560 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:33:55.560 Command Effects Log Page: Not Supported 00:33:55.560 Get Log Page Extended Data: Supported 00:33:55.560 Telemetry Log Pages: Not Supported 00:33:55.560 Persistent Event Log Pages: Not Supported 00:33:55.560 Supported Log Pages Log Page: May Support 00:33:55.560 Commands Supported & Effects Log Page: Not Supported 00:33:55.560 Feature Identifiers & Effects Log Page:May Support 00:33:55.560 NVMe-MI Commands & Effects Log Page: May Support 00:33:55.560 Data Area 4 for Telemetry Log: Not Supported 00:33:55.560 Error Log Page Entries Supported: 1 00:33:55.560 Keep Alive: Not Supported 00:33:55.560 00:33:55.560 NVM Command Set Attributes 00:33:55.560 ========================== 00:33:55.560 Submission Queue Entry Size 00:33:55.560 Max: 1 00:33:55.560 Min: 1 00:33:55.560 Completion Queue Entry Size 00:33:55.560 Max: 1 00:33:55.560 Min: 1 00:33:55.560 Number of Namespaces: 0 00:33:55.560 Compare Command: Not Supported 00:33:55.560 Write Uncorrectable Command: Not Supported 00:33:55.560 Dataset Management Command: Not Supported 00:33:55.560 Write Zeroes Command: Not Supported 00:33:55.560 Set Features Save Field: Not Supported 00:33:55.560 Reservations: Not Supported 00:33:55.560 Timestamp: Not Supported 00:33:55.560 Copy: Not Supported 00:33:55.560 Volatile Write Cache: Not Present 00:33:55.560 Atomic Write Unit (Normal): 1 00:33:55.560 Atomic Write Unit (PFail): 1 00:33:55.560 Atomic Compare & Write Unit: 1 00:33:55.560 Fused Compare & Write: Not Supported 00:33:55.560 Scatter-Gather List 00:33:55.560 SGL Command Set: Supported 00:33:55.560 SGL Keyed: Not Supported 00:33:55.560 SGL Bit Bucket Descriptor: Not Supported 00:33:55.560 SGL Metadata Pointer: Not Supported 00:33:55.560 Oversized SGL: Not Supported 00:33:55.560 SGL Metadata Address: Not Supported 00:33:55.560 SGL Offset: Supported 00:33:55.560 Transport SGL Data Block: Not Supported 00:33:55.560 Replay Protected Memory Block: Not Supported 00:33:55.560 00:33:55.560 Firmware Slot Information 00:33:55.560 ========================= 00:33:55.560 Active slot: 0 00:33:55.560 00:33:55.560 00:33:55.560 Error Log 00:33:55.560 ========= 00:33:55.560 00:33:55.560 Active Namespaces 00:33:55.560 ================= 00:33:55.560 Discovery Log Page 00:33:55.560 ================== 00:33:55.560 
Generation Counter: 2 00:33:55.560 Number of Records: 2 00:33:55.560 Record Format: 0 00:33:55.560 00:33:55.560 Discovery Log Entry 0 00:33:55.560 ---------------------- 00:33:55.560 Transport Type: 3 (TCP) 00:33:55.560 Address Family: 1 (IPv4) 00:33:55.560 Subsystem Type: 3 (Current Discovery Subsystem) 00:33:55.560 Entry Flags: 00:33:55.560 Duplicate Returned Information: 0 00:33:55.560 Explicit Persistent Connection Support for Discovery: 0 00:33:55.560 Transport Requirements: 00:33:55.560 Secure Channel: Not Specified 00:33:55.560 Port ID: 1 (0x0001) 00:33:55.560 Controller ID: 65535 (0xffff) 00:33:55.560 Admin Max SQ Size: 32 00:33:55.560 Transport Service Identifier: 4420 00:33:55.560 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:33:55.560 Transport Address: 10.0.0.1 00:33:55.560 Discovery Log Entry 1 00:33:55.560 ---------------------- 00:33:55.560 Transport Type: 3 (TCP) 00:33:55.560 Address Family: 1 (IPv4) 00:33:55.560 Subsystem Type: 2 (NVM Subsystem) 00:33:55.560 Entry Flags: 00:33:55.560 Duplicate Returned Information: 0 00:33:55.560 Explicit Persistent Connection Support for Discovery: 0 00:33:55.560 Transport Requirements: 00:33:55.560 Secure Channel: Not Specified 00:33:55.560 Port ID: 1 (0x0001) 00:33:55.560 Controller ID: 65535 (0xffff) 00:33:55.560 Admin Max SQ Size: 32 00:33:55.560 Transport Service Identifier: 4420 00:33:55.560 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:33:55.560 Transport Address: 10.0.0.1 00:33:55.560 16:39:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:55.560 EAL: No free 2048 kB hugepages reported on node 1 00:33:55.560 get_feature(0x01) failed 00:33:55.560 get_feature(0x02) failed 00:33:55.560 get_feature(0x04) failed 00:33:55.560 ===================================================== 00:33:55.560 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:55.560 ===================================================== 00:33:55.560 Controller Capabilities/Features 00:33:55.560 ================================ 00:33:55.560 Vendor ID: 0000 00:33:55.560 Subsystem Vendor ID: 0000 00:33:55.560 Serial Number: 64d9ddd4caa4d3189df0 00:33:55.560 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:33:55.560 Firmware Version: 6.7.0-68 00:33:55.560 Recommended Arb Burst: 6 00:33:55.560 IEEE OUI Identifier: 00 00 00 00:33:55.560 Multi-path I/O 00:33:55.560 May have multiple subsystem ports: Yes 00:33:55.560 May have multiple controllers: Yes 00:33:55.560 Associated with SR-IOV VF: No 00:33:55.560 Max Data Transfer Size: Unlimited 00:33:55.560 Max Number of Namespaces: 1024 00:33:55.560 Max Number of I/O Queues: 128 00:33:55.560 NVMe Specification Version (VS): 1.3 00:33:55.560 NVMe Specification Version (Identify): 1.3 00:33:55.560 Maximum Queue Entries: 1024 00:33:55.560 Contiguous Queues Required: No 00:33:55.560 Arbitration Mechanisms Supported 00:33:55.560 Weighted Round Robin: Not Supported 00:33:55.560 Vendor Specific: Not Supported 00:33:55.560 Reset Timeout: 7500 ms 00:33:55.560 Doorbell Stride: 4 bytes 00:33:55.560 NVM Subsystem Reset: Not Supported 00:33:55.560 Command Sets Supported 00:33:55.560 NVM Command Set: Supported 00:33:55.560 Boot Partition: Not Supported 00:33:55.560 Memory Page Size Minimum: 4096 bytes 00:33:55.560 Memory Page Size Maximum: 4096 bytes 00:33:55.560 
Persistent Memory Region: Not Supported 00:33:55.560 Optional Asynchronous Events Supported 00:33:55.560 Namespace Attribute Notices: Supported 00:33:55.560 Firmware Activation Notices: Not Supported 00:33:55.560 ANA Change Notices: Supported 00:33:55.560 PLE Aggregate Log Change Notices: Not Supported 00:33:55.560 LBA Status Info Alert Notices: Not Supported 00:33:55.560 EGE Aggregate Log Change Notices: Not Supported 00:33:55.560 Normal NVM Subsystem Shutdown event: Not Supported 00:33:55.560 Zone Descriptor Change Notices: Not Supported 00:33:55.560 Discovery Log Change Notices: Not Supported 00:33:55.560 Controller Attributes 00:33:55.560 128-bit Host Identifier: Supported 00:33:55.560 Non-Operational Permissive Mode: Not Supported 00:33:55.560 NVM Sets: Not Supported 00:33:55.560 Read Recovery Levels: Not Supported 00:33:55.560 Endurance Groups: Not Supported 00:33:55.560 Predictable Latency Mode: Not Supported 00:33:55.560 Traffic Based Keep ALive: Supported 00:33:55.561 Namespace Granularity: Not Supported 00:33:55.561 SQ Associations: Not Supported 00:33:55.561 UUID List: Not Supported 00:33:55.561 Multi-Domain Subsystem: Not Supported 00:33:55.561 Fixed Capacity Management: Not Supported 00:33:55.561 Variable Capacity Management: Not Supported 00:33:55.561 Delete Endurance Group: Not Supported 00:33:55.561 Delete NVM Set: Not Supported 00:33:55.561 Extended LBA Formats Supported: Not Supported 00:33:55.561 Flexible Data Placement Supported: Not Supported 00:33:55.561 00:33:55.561 Controller Memory Buffer Support 00:33:55.561 ================================ 00:33:55.561 Supported: No 00:33:55.561 00:33:55.561 Persistent Memory Region Support 00:33:55.561 ================================ 00:33:55.561 Supported: No 00:33:55.561 00:33:55.561 Admin Command Set Attributes 00:33:55.561 ============================ 00:33:55.561 Security Send/Receive: Not Supported 00:33:55.561 Format NVM: Not Supported 00:33:55.561 Firmware Activate/Download: Not Supported 00:33:55.561 Namespace Management: Not Supported 00:33:55.561 Device Self-Test: Not Supported 00:33:55.561 Directives: Not Supported 00:33:55.561 NVMe-MI: Not Supported 00:33:55.561 Virtualization Management: Not Supported 00:33:55.561 Doorbell Buffer Config: Not Supported 00:33:55.561 Get LBA Status Capability: Not Supported 00:33:55.561 Command & Feature Lockdown Capability: Not Supported 00:33:55.561 Abort Command Limit: 4 00:33:55.561 Async Event Request Limit: 4 00:33:55.561 Number of Firmware Slots: N/A 00:33:55.561 Firmware Slot 1 Read-Only: N/A 00:33:55.561 Firmware Activation Without Reset: N/A 00:33:55.561 Multiple Update Detection Support: N/A 00:33:55.561 Firmware Update Granularity: No Information Provided 00:33:55.561 Per-Namespace SMART Log: Yes 00:33:55.561 Asymmetric Namespace Access Log Page: Supported 00:33:55.561 ANA Transition Time : 10 sec 00:33:55.561 00:33:55.561 Asymmetric Namespace Access Capabilities 00:33:55.561 ANA Optimized State : Supported 00:33:55.561 ANA Non-Optimized State : Supported 00:33:55.561 ANA Inaccessible State : Supported 00:33:55.561 ANA Persistent Loss State : Supported 00:33:55.561 ANA Change State : Supported 00:33:55.561 ANAGRPID is not changed : No 00:33:55.561 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:33:55.561 00:33:55.561 ANA Group Identifier Maximum : 128 00:33:55.561 Number of ANA Group Identifiers : 128 00:33:55.561 Max Number of Allowed Namespaces : 1024 00:33:55.561 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:33:55.561 Command Effects Log Page: Supported 
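The testnqn controller above advertises ANA with a single group (ANA Group ID 1, optimized state). On the Linux kernel target that state lives in the nvmet configfs tree; the sketch below shows where it could be inspected. The attribute paths are an assumption based on the upstream nvmet configfs layout (ANA support since kernel 4.19) and are not exercised by this test script.

# assumed nvmet configfs paths; port 1 and namespace 1 match this run's target
cat /sys/kernel/config/nvmet/ports/1/ana_groups/1/ana_state                                  # e.g. "optimized"
cat /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/ana_grpid   # group the namespace maps to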
00:33:55.561 Get Log Page Extended Data: Supported 00:33:55.561 Telemetry Log Pages: Not Supported 00:33:55.561 Persistent Event Log Pages: Not Supported 00:33:55.561 Supported Log Pages Log Page: May Support 00:33:55.561 Commands Supported & Effects Log Page: Not Supported 00:33:55.561 Feature Identifiers & Effects Log Page:May Support 00:33:55.561 NVMe-MI Commands & Effects Log Page: May Support 00:33:55.561 Data Area 4 for Telemetry Log: Not Supported 00:33:55.561 Error Log Page Entries Supported: 128 00:33:55.561 Keep Alive: Supported 00:33:55.561 Keep Alive Granularity: 1000 ms 00:33:55.561 00:33:55.561 NVM Command Set Attributes 00:33:55.561 ========================== 00:33:55.561 Submission Queue Entry Size 00:33:55.561 Max: 64 00:33:55.561 Min: 64 00:33:55.561 Completion Queue Entry Size 00:33:55.561 Max: 16 00:33:55.561 Min: 16 00:33:55.561 Number of Namespaces: 1024 00:33:55.561 Compare Command: Not Supported 00:33:55.561 Write Uncorrectable Command: Not Supported 00:33:55.561 Dataset Management Command: Supported 00:33:55.561 Write Zeroes Command: Supported 00:33:55.561 Set Features Save Field: Not Supported 00:33:55.561 Reservations: Not Supported 00:33:55.561 Timestamp: Not Supported 00:33:55.561 Copy: Not Supported 00:33:55.561 Volatile Write Cache: Present 00:33:55.561 Atomic Write Unit (Normal): 1 00:33:55.561 Atomic Write Unit (PFail): 1 00:33:55.561 Atomic Compare & Write Unit: 1 00:33:55.561 Fused Compare & Write: Not Supported 00:33:55.561 Scatter-Gather List 00:33:55.561 SGL Command Set: Supported 00:33:55.561 SGL Keyed: Not Supported 00:33:55.561 SGL Bit Bucket Descriptor: Not Supported 00:33:55.561 SGL Metadata Pointer: Not Supported 00:33:55.561 Oversized SGL: Not Supported 00:33:55.561 SGL Metadata Address: Not Supported 00:33:55.561 SGL Offset: Supported 00:33:55.561 Transport SGL Data Block: Not Supported 00:33:55.561 Replay Protected Memory Block: Not Supported 00:33:55.561 00:33:55.561 Firmware Slot Information 00:33:55.561 ========================= 00:33:55.561 Active slot: 0 00:33:55.561 00:33:55.561 Asymmetric Namespace Access 00:33:55.561 =========================== 00:33:55.561 Change Count : 0 00:33:55.561 Number of ANA Group Descriptors : 1 00:33:55.561 ANA Group Descriptor : 0 00:33:55.561 ANA Group ID : 1 00:33:55.561 Number of NSID Values : 1 00:33:55.561 Change Count : 0 00:33:55.561 ANA State : 1 00:33:55.561 Namespace Identifier : 1 00:33:55.561 00:33:55.561 Commands Supported and Effects 00:33:55.561 ============================== 00:33:55.561 Admin Commands 00:33:55.561 -------------- 00:33:55.561 Get Log Page (02h): Supported 00:33:55.561 Identify (06h): Supported 00:33:55.561 Abort (08h): Supported 00:33:55.561 Set Features (09h): Supported 00:33:55.561 Get Features (0Ah): Supported 00:33:55.561 Asynchronous Event Request (0Ch): Supported 00:33:55.561 Keep Alive (18h): Supported 00:33:55.561 I/O Commands 00:33:55.561 ------------ 00:33:55.561 Flush (00h): Supported 00:33:55.561 Write (01h): Supported LBA-Change 00:33:55.561 Read (02h): Supported 00:33:55.561 Write Zeroes (08h): Supported LBA-Change 00:33:55.561 Dataset Management (09h): Supported 00:33:55.561 00:33:55.561 Error Log 00:33:55.561 ========= 00:33:55.561 Entry: 0 00:33:55.561 Error Count: 0x3 00:33:55.561 Submission Queue Id: 0x0 00:33:55.561 Command Id: 0x5 00:33:55.561 Phase Bit: 0 00:33:55.561 Status Code: 0x2 00:33:55.561 Status Code Type: 0x0 00:33:55.561 Do Not Retry: 1 00:33:55.561 Error Location: 0x28 00:33:55.561 LBA: 0x0 00:33:55.561 Namespace: 0x0 00:33:55.561 Vendor Log 
Page: 0x0 00:33:55.561 ----------- 00:33:55.561 Entry: 1 00:33:55.561 Error Count: 0x2 00:33:55.561 Submission Queue Id: 0x0 00:33:55.561 Command Id: 0x5 00:33:55.561 Phase Bit: 0 00:33:55.561 Status Code: 0x2 00:33:55.561 Status Code Type: 0x0 00:33:55.561 Do Not Retry: 1 00:33:55.561 Error Location: 0x28 00:33:55.561 LBA: 0x0 00:33:55.562 Namespace: 0x0 00:33:55.562 Vendor Log Page: 0x0 00:33:55.562 ----------- 00:33:55.562 Entry: 2 00:33:55.562 Error Count: 0x1 00:33:55.562 Submission Queue Id: 0x0 00:33:55.562 Command Id: 0x4 00:33:55.562 Phase Bit: 0 00:33:55.562 Status Code: 0x2 00:33:55.562 Status Code Type: 0x0 00:33:55.562 Do Not Retry: 1 00:33:55.562 Error Location: 0x28 00:33:55.562 LBA: 0x0 00:33:55.562 Namespace: 0x0 00:33:55.562 Vendor Log Page: 0x0 00:33:55.562 00:33:55.562 Number of Queues 00:33:55.562 ================ 00:33:55.562 Number of I/O Submission Queues: 128 00:33:55.562 Number of I/O Completion Queues: 128 00:33:55.562 00:33:55.562 ZNS Specific Controller Data 00:33:55.562 ============================ 00:33:55.562 Zone Append Size Limit: 0 00:33:55.562 00:33:55.562 00:33:55.562 Active Namespaces 00:33:55.562 ================= 00:33:55.562 get_feature(0x05) failed 00:33:55.562 Namespace ID:1 00:33:55.562 Command Set Identifier: NVM (00h) 00:33:55.562 Deallocate: Supported 00:33:55.562 Deallocated/Unwritten Error: Not Supported 00:33:55.562 Deallocated Read Value: Unknown 00:33:55.562 Deallocate in Write Zeroes: Not Supported 00:33:55.562 Deallocated Guard Field: 0xFFFF 00:33:55.562 Flush: Supported 00:33:55.562 Reservation: Not Supported 00:33:55.562 Namespace Sharing Capabilities: Multiple Controllers 00:33:55.562 Size (in LBAs): 1953525168 (931GiB) 00:33:55.562 Capacity (in LBAs): 1953525168 (931GiB) 00:33:55.562 Utilization (in LBAs): 1953525168 (931GiB) 00:33:55.562 UUID: a023bca7-c4c4-440d-8a6a-9c61858e2c61 00:33:55.562 Thin Provisioning: Not Supported 00:33:55.562 Per-NS Atomic Units: Yes 00:33:55.562 Atomic Boundary Size (Normal): 0 00:33:55.562 Atomic Boundary Size (PFail): 0 00:33:55.562 Atomic Boundary Offset: 0 00:33:55.562 NGUID/EUI64 Never Reused: No 00:33:55.562 ANA group ID: 1 00:33:55.562 Namespace Write Protected: No 00:33:55.562 Number of LBA Formats: 1 00:33:55.562 Current LBA Format: LBA Format #00 00:33:55.562 LBA Format #00: Data Size: 512 Metadata Size: 0 00:33:55.562 00:33:55.562 16:39:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:33:55.562 16:39:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:55.562 16:39:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:33:55.562 16:39:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:55.562 16:39:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:33:55.562 16:39:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:55.562 16:39:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:55.562 rmmod nvme_tcp 00:33:55.562 rmmod nvme_fabrics 00:33:55.822 16:39:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:55.822 16:39:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:33:55.822 16:39:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:33:55.823 16:39:15 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:33:55.823 16:39:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:55.823 16:39:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:55.823 16:39:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:55.823 16:39:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:55.823 16:39:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:55.823 16:39:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:55.823 16:39:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:55.823 16:39:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:57.734 16:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:57.734 16:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:33:57.734 16:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:57.734 16:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:33:57.734 16:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:57.734 16:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:57.734 16:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:57.734 16:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:57.734 16:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:57.734 16:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:57.734 16:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:59.113 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:59.113 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:59.113 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:59.113 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:59.113 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:59.113 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:59.113 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:59.113 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:59.113 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:59.113 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:59.113 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:59.113 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:59.113 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:59.113 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:59.113 0000:80:04.1 (8086 0e21): ioatdma -> 
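clean_kernel_target above dismantles the kernel target in the reverse order of its creation: unlink the subsystem from the port, remove the namespace directory, remove the port and subsystem directories, then unload nvmet_tcp/nvmet. For reference, a minimal sketch of the corresponding configfs setup is below; the backing device path is an assumption (the job's kernel-target setup helper in nvmf/common.sh, run earlier, selects the real one), while the NQN, port number, and address match this run.

modprobe nvmet nvmet_tcp
cd /sys/kernel/config/nvmet
mkdir -p subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
echo /dev/nvme0n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path    # assumed backing block device
echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
mkdir ports/1
echo tcp      > ports/1/addr_trtype
echo ipv4     > ports/1/addr_adrfam
echo 10.0.0.1 > ports/1/addr_traddr
echo 4420     > ports/1/addr_trsvcid
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/nqn.2016-06.io.spdk:testnqn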
vfio-pci 00:33:59.113 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:00.052 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:34:00.052 00:34:00.052 real 0m9.531s 00:34:00.052 user 0m2.015s 00:34:00.052 sys 0m3.480s 00:34:00.052 16:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:00.052 16:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:00.052 ************************************ 00:34:00.052 END TEST nvmf_identify_kernel_target 00:34:00.052 ************************************ 00:34:00.052 16:39:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:00.052 16:39:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:00.052 16:39:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:00.052 16:39:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.052 ************************************ 00:34:00.052 START TEST nvmf_auth_host 00:34:00.052 ************************************ 00:34:00.052 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:00.313 * Looking for test storage... 00:34:00.313 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
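For the auth run, nvmf/common.sh establishes the host identity shown a little earlier: NVME_HOSTNQN comes from 'nvme gen-hostnqn' and NVME_HOSTID is the UUID portion of that NQN, and both are later passed to 'nvme connect' via the NVME_HOST array. A minimal sketch, assuming nvme-cli is installed (the suffix-stripping expression is illustrative, not the script's literal code):

NVME_HOSTNQN=$(nvme gen-hostnqn)                        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}                     # keep only the UUID suffix
echo "--hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID"    # the options handed to 'nvme connect'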
00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:00.313 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:34:00.314 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:02.221 16:39:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:02.221 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:02.221 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:02.221 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:02.221 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:02.221 16:39:21 
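The device scan above classifies the two Intel E810 functions (device ID 0x159b) as supported NICs and resolves each one's netdev through the pci_net_devs glob, yielding cvl_0_0 and cvl_0_1. The same lookup, stripped down to a standalone sketch with the PCI addresses from this run:

for pci in 0000:0a:00.0 0000:0a:00.1; do
    # any netdev bound to this PCI function appears as a directory under .../net/
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$dev" ] || continue              # glob may not match if the driver is unbound
        echo "$pci -> ${dev##*/}"              # e.g. 0000:0a:00.0 -> cvl_0_0
    done
done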
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:02.221 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:02.222 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:02.222 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:02.222 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:02.222 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:02.222 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:02.222 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:02.222 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:02.222 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:02.222 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:02.222 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:02.222 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:02.222 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:02.222 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:02.222 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:02.222 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:02.222 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:02.222 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:02.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:02.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:34:02.222 00:34:02.222 --- 10.0.0.2 ping statistics --- 00:34:02.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:02.222 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:34:02.222 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:02.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:02.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:34:02.222 00:34:02.222 --- 10.0.0.1 ping statistics --- 00:34:02.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:02.222 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:34:02.222 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:02.222 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:34:02.222 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:02.222 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:02.222 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:02.222 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:02.222 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:02.222 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:02.222 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:02.222 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:34:02.222 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:02.222 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:02.222 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.222 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=800896 00:34:02.222 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:34:02.222 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 800896 00:34:02.222 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 800896 ']' 00:34:02.222 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:02.222 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:02.222 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
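nvmf_tcp_init above wires up the back-to-back topology the rest of the suite relies on: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target interface (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule admits TCP/4420, and both directions are ping-verified before nvmf_tgt is launched inside the namespace. Condensed into a sketch with the names from this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# the target then runs namespaced, exactly as logged:
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth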
00:34:02.222 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:02.222 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.157 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=736645cba2d0884222f99c6dc8b6f681 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.qEv 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 736645cba2d0884222f99c6dc8b6f681 0 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 736645cba2d0884222f99c6dc8b6f681 0 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=736645cba2d0884222f99c6dc8b6f681 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.qEv 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.qEv 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.qEv 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:03.158 16:39:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=afc4cc0223c926c98b6c35edb7630f839bc546286d6189ae2da20f7aa58f0aae 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.m3g 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key afc4cc0223c926c98b6c35edb7630f839bc546286d6189ae2da20f7aa58f0aae 3 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 afc4cc0223c926c98b6c35edb7630f839bc546286d6189ae2da20f7aa58f0aae 3 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=afc4cc0223c926c98b6c35edb7630f839bc546286d6189ae2da20f7aa58f0aae 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:34:03.158 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:03.419 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.m3g 00:34:03.419 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.m3g 00:34:03.419 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.m3g 00:34:03.419 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:34:03.419 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:03.419 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:03.419 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:03.419 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:34:03.419 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:34:03.419 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:03.419 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d17b7459a938602289d96fdc142c2f758c67db6f3178bb87 00:34:03.419 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:34:03.419 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Jxn 00:34:03.419 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d17b7459a938602289d96fdc142c2f758c67db6f3178bb87 0 00:34:03.419 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d17b7459a938602289d96fdc142c2f758c67db6f3178bb87 0 
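Each gen_dhchap_key call above draws the requested number of random bytes with xxd, wraps them into a DHHC-1 secret with a small inline Python helper, and stores the result mode-0600 under /tmp/spdk.key-<digest>.XXX; keys[] collects the host-side secrets and ckeys[] the matching controller-side secrets. A standalone sketch of the same idea is below. The exact encoding is an assumption: per NVMe TP-8006 a configured secret is 'DHHC-1:<hmac id>:' plus the base64 of the raw key followed by a little-endian CRC32, and SPDK's format_dhchap_key helper may differ in detail.

key_hex=$(xxd -p -c0 -l 16 /dev/urandom)     # 16 random bytes = 32 hex chars, as in 'gen_dhchap_key null 32'
secret=$(python3 - "$key_hex" <<'EOF'
import sys, base64, binascii, zlib
raw = binascii.unhexlify(sys.argv[1])
crc = zlib.crc32(raw).to_bytes(4, "little")                           # assumed CRC32 trailer per TP-8006
print("DHHC-1:00:" + base64.b64encode(raw + crc).decode() + ":")      # hmac id 00 = null/plaintext key
EOF
)
keyfile=$(mktemp -t spdk.key-null.XXX)
printf '%s\n' "$secret" > "$keyfile" && chmod 0600 "$keyfile"         # same 0600 permissions the test applies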
00:34:03.419 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:03.419 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:03.419 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d17b7459a938602289d96fdc142c2f758c67db6f3178bb87 00:34:03.419 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:34:03.419 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:03.419 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Jxn 00:34:03.419 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Jxn 00:34:03.420 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Jxn 00:34:03.420 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:34:03.420 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:03.420 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:03.420 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:03.420 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:34:03.420 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:34:03.420 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:03.420 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=acc4f77006503ec49e7ac2bb19533cdc455250eb7d06be18 00:34:03.420 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:34:03.420 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.tFY 00:34:03.420 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key acc4f77006503ec49e7ac2bb19533cdc455250eb7d06be18 2 00:34:03.420 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 acc4f77006503ec49e7ac2bb19533cdc455250eb7d06be18 2 00:34:03.420 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:03.420 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:03.420 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=acc4f77006503ec49e7ac2bb19533cdc455250eb7d06be18 00:34:03.420 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:34:03.420 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.tFY 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.tFY 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.tFY 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:03.420 16:39:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a564f1cfb793fe6c0e1ff14fe56be73e 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.kEh 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a564f1cfb793fe6c0e1ff14fe56be73e 1 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a564f1cfb793fe6c0e1ff14fe56be73e 1 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a564f1cfb793fe6c0e1ff14fe56be73e 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.kEh 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.kEh 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.kEh 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cd41504559c17aa7b99b073dc14ea615 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.6MS 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cd41504559c17aa7b99b073dc14ea615 1 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cd41504559c17aa7b99b073dc14ea615 1 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=cd41504559c17aa7b99b073dc14ea615 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.6MS 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.6MS 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.6MS 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6ed853f9af3197912819c6ff60658cd701452837f6e64fcd 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.zFq 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6ed853f9af3197912819c6ff60658cd701452837f6e64fcd 2 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6ed853f9af3197912819c6ff60658cd701452837f6e64fcd 2 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6ed853f9af3197912819c6ff60658cd701452837f6e64fcd 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.zFq 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.zFq 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.zFq 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:34:03.420 16:39:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ff9f0c1f92dea0660c03855c4930bbe9 00:34:03.420 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:34:03.683 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.dLv 00:34:03.683 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ff9f0c1f92dea0660c03855c4930bbe9 0 00:34:03.683 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ff9f0c1f92dea0660c03855c4930bbe9 0 00:34:03.683 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:03.683 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:03.683 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ff9f0c1f92dea0660c03855c4930bbe9 00:34:03.683 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:34:03.683 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:03.683 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.dLv 00:34:03.683 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.dLv 00:34:03.683 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.dLv 00:34:03.683 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:34:03.683 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:03.683 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:03.683 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:03.683 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:34:03.683 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:34:03.683 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:03.683 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8089397b0343f5ee786f8b9bad7993062f6971c00d31edd9f38457f782de6a71 00:34:03.683 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:34:03.683 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.YaQ 00:34:03.683 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8089397b0343f5ee786f8b9bad7993062f6971c00d31edd9f38457f782de6a71 3 00:34:03.683 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8089397b0343f5ee786f8b9bad7993062f6971c00d31edd9f38457f782de6a71 3 00:34:03.683 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:03.683 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:03.683 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8089397b0343f5ee786f8b9bad7993062f6971c00d31edd9f38457f782de6a71 00:34:03.683 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:34:03.683 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:34:03.683 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.YaQ 00:34:03.683 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.YaQ 00:34:03.683 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.YaQ 00:34:03.683 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:34:03.683 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 800896 00:34:03.683 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 800896 ']' 00:34:03.683 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:03.683 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:03.683 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:03.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:03.684 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:03.684 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.qEv 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.m3g ]] 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.m3g 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Jxn 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.tFY ]] 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.tFY 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.kEh 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.6MS ]] 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6MS 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.zFq 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.dLv ]] 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.dLv 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.YaQ 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:03.973 16:39:23 
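Once the temp files exist, the @80-@82 loop above registers each one with the SPDK application's keyring so the host side can later refer to them by name (key0..key4, ckey0..ckey3) instead of by path. rpc_cmd is the test framework's wrapper around scripts/rpc.py pointed at the process started with waitforlisten; a stand-alone equivalent for one key/ckey pair would look roughly like the lines below (socket path and temp-file names are simply the ones from this run, shown for illustration).

  # Register a host key and its controller key with SPDK's file-based keyring.
  ./scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key1  /tmp/spdk.key-null.Jxn
  ./scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.tFY
  # ...repeated for the remaining pairs generated above; entries with no controller
  # key (ckeys[4] is empty) simply skip the second call, as the [[ -n ... ]] guard shows.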
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:03.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:03.974 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.974 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.974 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:03.974 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:03.974 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:03.974 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:03.974 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:03.974 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:34:03.974 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:34:03.974 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:34:03.974 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:03.974 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:03.974 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:03.974 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:34:03.974 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:34:03.974 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:34:03.974 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:03.974 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:05.351 Waiting for block devices as requested 00:34:05.351 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:34:05.351 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:05.351 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:05.609 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:05.609 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:05.609 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:05.609 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:05.869 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:05.869 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:05.869 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:06.127 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:06.127 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:06.127 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:06.127 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:06.386 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:06.386 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:06.386 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:06.954 No valid GPT data, bailing 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:06.954 16:39:26 
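configure_kernel_target then stands up the kernel NVMe-oF target that the SPDK host will authenticate against: setup.sh reset hands the NVMe device back to the kernel driver, the first non-zoned, unused namespace (/dev/nvme0n1 here) is picked as backing storage, nvmet is loaded, and the subsystem, namespace, and TCP port are built through configfs. xtrace does not print the redirection targets of the echo calls, so the attribute file names in the sketch below are an assumption based on the usual kernel nvmet configfs layout rather than something visible in the log; the values are the ones the trace shows.

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  port=$nvmet/ports/1

  modprobe nvmet
  mkdir "$subsys" "$subsys/namespaces/1" "$port"

  echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # assumed target of the @665 echo
  echo 1             > "$subsys/attr_allow_any_host"            # assumed target of the @667 echo
  echo /dev/nvme0n1  > "$subsys/namespaces/1/device_path"
  echo 1             > "$subsys/namespaces/1/enable"
  echo 10.0.0.1      > "$port/addr_traddr"
  echo tcp           > "$port/addr_trtype"
  echo 4420          > "$port/addr_trsvcid"
  echo ipv4          > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"                           # expose the subsystem on the port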
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:34:06.954 00:34:06.954 Discovery Log Number of Records 2, Generation counter 2 00:34:06.954 =====Discovery Log Entry 0====== 00:34:06.954 trtype: tcp 00:34:06.954 adrfam: ipv4 00:34:06.954 subtype: current discovery subsystem 00:34:06.954 treq: not specified, sq flow control disable supported 00:34:06.954 portid: 1 00:34:06.954 trsvcid: 4420 00:34:06.954 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:06.954 traddr: 10.0.0.1 00:34:06.954 eflags: none 00:34:06.954 sectype: none 00:34:06.954 =====Discovery Log Entry 1====== 00:34:06.954 trtype: tcp 00:34:06.954 adrfam: ipv4 00:34:06.954 subtype: nvme subsystem 00:34:06.954 treq: not specified, sq flow control disable supported 00:34:06.954 portid: 1 00:34:06.954 trsvcid: 4420 00:34:06.954 subnqn: nqn.2024-02.io.spdk:cnode0 00:34:06.954 traddr: 10.0.0.1 00:34:06.954 eflags: none 00:34:06.954 sectype: none 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE3Yjc0NTlhOTM4NjAyMjg5ZDk2ZmRjMTQyYzJmNzU4YzY3ZGI2ZjMxNzhiYjg3B7nztw==: 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE3Yjc0NTlhOTM4NjAyMjg5ZDk2ZmRjMTQyYzJmNzU4YzY3ZGI2ZjMxNzhiYjg3B7nztw==: 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: ]] 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:06.954 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:06.955 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:06.955 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:06.955 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.955 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.213 nvme0n1 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2NjQ1Y2JhMmQwODg0MjIyZjk5YzZkYzhiNmY2ODFEJl3u: 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2NjQ1Y2JhMmQwODg0MjIyZjk5YzZkYzhiNmY2ODFEJl3u: 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: ]] 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
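With the plumbing in place, each test iteration first programs the target side: nvmet_auth_set_key (the @42-@51 lines above) writes the negotiated hash, the DH group, the host secret and, when a controller secret exists, the bidirectional key into the kernel host entry that nvmet_auth_init created and linked into allowed_hosts. The echo redirections are again hidden by xtrace, so the attribute names below are assumptions based on the standard nvmet host configfs entries; the values come from the trace and are truncated here for readability.

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

  echo 'hmac(sha256)'        > "$host/dhchap_hash"       # digest used for DH-HMAC-CHAP
  echo ffdhe2048             > "$host/dhchap_dhgroup"    # DH group under test
  echo 'DHHC-1:00:NzM2N...:' > "$host/dhchap_key"        # host secret (keys[0], truncated)
  echo 'DHHC-1:03:YWZjN...:' > "$host/dhchap_ctrl_key"   # controller secret (ckeys[0], truncated);
                                                         # skipped when ckey is empty, per the [[ -z ]] check at @51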
00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.213 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.471 nvme0n1 00:34:07.471 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.471 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.471 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.471 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.472 16:39:27 
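connect_authenticate is the host-side half of each iteration: it restricts the SPDK bdev/nvme layer to the digest and DH group under test, attaches to the kernel target using the named keyring entries, checks that a controller called nvme0 actually appeared, and detaches again. The rpc_cmd calls in the trace map onto plain scripts/rpc.py invocations; the keyid-0 pass just shown is roughly equivalent to:

  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0

The stray nvme0n1 lines scattered between iterations appear to be the attach RPC printing the bdev it created; a failed DH-HMAC-CHAP handshake would instead make the attach call error out and abort the test.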
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE3Yjc0NTlhOTM4NjAyMjg5ZDk2ZmRjMTQyYzJmNzU4YzY3ZGI2ZjMxNzhiYjg3B7nztw==: 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE3Yjc0NTlhOTM4NjAyMjg5ZDk2ZmRjMTQyYzJmNzU4YzY3ZGI2ZjMxNzhiYjg3B7nztw==: 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: ]] 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.472 nvme0n1 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.472 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.730 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.730 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.730 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.730 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.730 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.730 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.730 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.730 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:07.730 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.730 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:07.730 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:07.730 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:07.730 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTU2NGYxY2ZiNzkzZmU2YzBlMWZmMTRmZTU2YmU3M2VlPNNf: 00:34:07.730 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: 00:34:07.730 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:07.730 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:07.730 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:YTU2NGYxY2ZiNzkzZmU2YzBlMWZmMTRmZTU2YmU3M2VlPNNf: 00:34:07.730 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: ]] 00:34:07.730 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: 00:34:07.730 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:34:07.730 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.730 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:07.730 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:07.730 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:07.730 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.730 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:07.730 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.730 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.730 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.730 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.730 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:07.730 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:07.730 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:07.730 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.730 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.730 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:07.730 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.730 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:07.730 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:07.730 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:07.731 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:07.731 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.731 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.731 nvme0n1 00:34:07.731 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.731 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.731 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.731 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:34:07.731 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.731 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.731 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.731 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.731 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.731 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVkODUzZjlhZjMxOTc5MTI4MTljNmZmNjA2NThjZDcwMTQ1MjgzN2Y2ZTY0ZmNkwLyeCw==: 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVkODUzZjlhZjMxOTc5MTI4MTljNmZmNjA2NThjZDcwMTQ1MjgzN2Y2ZTY0ZmNkwLyeCw==: 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: ]] 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.990 nvme0n1 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODA4OTM5N2IwMzQzZjVlZTc4NmY4YjliYWQ3OTkzMDYyZjY5NzFjMDBkMzFlZGQ5ZjM4NDU3Zjc4MmRlNmE3MTxzJMs=: 
00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODA4OTM5N2IwMzQzZjVlZTc4NmY4YjliYWQ3OTkzMDYyZjY5NzFjMDBkMzFlZGQ5ZjM4NDU3Zjc4MmRlNmE3MTxzJMs=: 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:07.990 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:08.250 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:08.250 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.250 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.250 nvme0n1 00:34:08.250 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.250 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.250 16:39:27 
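The remainder of the section is this same set_key/connect cycle swept across the whole parameter matrix. Reconstructed from the loop markers in the trace (@100, @101, @102 in host/auth.sh), the driver is three nested loops; the ffdhe2048 column for sha256 has just finished above, and the lines that follow start the ffdhe3072 pass.

  for digest in "${digests[@]}"; do            # sha256 sha384 sha512 (per the @94 printf earlier)
      for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192
          for keyid in "${!keys[@]}"; do       # 0..4, pairing keys[i] with ckeys[i]
              nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"   # program the kernel target host entry
              connect_authenticate "$digest" "$dhgroup" "$keyid"   # attach from SPDK and verify nvme0
          done
      done
  done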
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.250 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.250 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.250 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.250 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.250 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.250 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.250 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.250 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.250 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:08.250 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.250 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:34:08.250 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.250 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:08.251 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:08.251 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:08.251 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2NjQ1Y2JhMmQwODg0MjIyZjk5YzZkYzhiNmY2ODFEJl3u: 00:34:08.251 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: 00:34:08.251 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:08.251 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:08.251 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2NjQ1Y2JhMmQwODg0MjIyZjk5YzZkYzhiNmY2ODFEJl3u: 00:34:08.251 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: ]] 00:34:08.251 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: 00:34:08.251 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:34:08.251 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:08.251 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:08.251 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:08.251 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:08.251 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:08.251 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:08.251 
16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.251 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.251 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.251 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:08.251 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:08.251 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:08.251 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:08.251 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.251 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.251 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:08.251 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:08.251 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:08.251 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:08.251 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:08.251 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:08.251 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.251 16:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.510 nvme0n1 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE3Yjc0NTlhOTM4NjAyMjg5ZDk2ZmRjMTQyYzJmNzU4YzY3ZGI2ZjMxNzhiYjg3B7nztw==: 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE3Yjc0NTlhOTM4NjAyMjg5ZDk2ZmRjMTQyYzJmNzU4YzY3ZGI2ZjMxNzhiYjg3B7nztw==: 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: ]] 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:08.510 16:39:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.510 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.769 nvme0n1 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTU2NGYxY2ZiNzkzZmU2YzBlMWZmMTRmZTU2YmU3M2VlPNNf: 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTU2NGYxY2ZiNzkzZmU2YzBlMWZmMTRmZTU2YmU3M2VlPNNf: 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: ]] 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:08.769 16:39:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.769 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.027 nvme0n1 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVkODUzZjlhZjMxOTc5MTI4MTljNmZmNjA2NThjZDcwMTQ1MjgzN2Y2ZTY0ZmNkwLyeCw==: 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVkODUzZjlhZjMxOTc5MTI4MTljNmZmNjA2NThjZDcwMTQ1MjgzN2Y2ZTY0ZmNkwLyeCw==: 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: ]] 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:09.027 16:39:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.027 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.286 nvme0n1 00:34:09.286 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.286 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.286 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.286 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.286 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.286 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.286 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.286 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.286 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.286 16:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.286 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.286 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.286 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:34:09.286 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.286 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:09.286 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:09.286 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:09.286 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODA4OTM5N2IwMzQzZjVlZTc4NmY4YjliYWQ3OTkzMDYyZjY5NzFjMDBkMzFlZGQ5ZjM4NDU3Zjc4MmRlNmE3MTxzJMs=: 00:34:09.286 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:09.286 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:09.286 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:09.286 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODA4OTM5N2IwMzQzZjVlZTc4NmY4YjliYWQ3OTkzMDYyZjY5NzFjMDBkMzFlZGQ5ZjM4NDU3Zjc4MmRlNmE3MTxzJMs=: 00:34:09.286 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:09.286 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 
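The cycle that repeats throughout this trace for every digest/dhgroup/keyid combination is: load the key under test on the target, restrict the host to the matching DH-HMAC-CHAP digest and FFDHE group with bdev_nvme_set_options, attach a controller with --dhchap-key (plus --dhchap-ctrlr-key when a controller key exists), confirm it shows up in bdev_nvme_get_controllers, then detach. The bash sketch below condenses one such cycle from the commands visible above; rpc_cmd is the SPDK RPC wrapper used by the test, and the key names key<N>/ckey<N> are assumed to have been registered earlier in the run — this is a simplification, not the actual host/auth.sh.

#!/usr/bin/env bash
# Minimal sketch of one DH-HMAC-CHAP connect/verify/detach cycle from the
# trace above. Assumes the SPDK rpc_cmd wrapper is available and that keys
# named "key<N>" / "ckey<N>" were registered earlier in the test run.
digest=sha256
dhgroup=ffdhe3072
keyid=4
ctrlr_key=""                # empty for keyid 4 in this trace (no controller key)
ip=10.0.0.1                 # NVMF_INITIATOR_IP as resolved by get_main_ns_ip

# Restrict the host to the digest/dhgroup combination under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Connect, passing the controller key only when one exists for this keyid.
extra=()
[[ -n "$ctrlr_key" ]] && extra=(--dhchap-ctrlr-key "$ctrlr_key")
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ip" -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" "${extra[@]}"

# Authentication succeeded if the controller is visible; then clean up.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0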
00:34:09.286 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.286 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:09.286 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:09.286 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:09.286 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.286 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:09.286 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.286 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.286 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.286 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.286 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:09.286 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:09.286 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:09.286 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.286 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.286 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:09.286 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.286 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:09.286 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:09.286 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:09.286 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:09.286 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.286 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.545 nvme0n1 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2NjQ1Y2JhMmQwODg0MjIyZjk5YzZkYzhiNmY2ODFEJl3u: 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2NjQ1Y2JhMmQwODg0MjIyZjk5YzZkYzhiNmY2ODFEJl3u: 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: ]] 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.545 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.113 nvme0n1 00:34:10.113 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.113 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.113 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.113 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.113 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.113 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.113 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.113 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.113 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.113 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.113 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.113 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.113 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:34:10.113 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.113 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:10.113 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:10.113 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:10.113 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE3Yjc0NTlhOTM4NjAyMjg5ZDk2ZmRjMTQyYzJmNzU4YzY3ZGI2ZjMxNzhiYjg3B7nztw==: 00:34:10.113 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: 00:34:10.113 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:10.113 16:39:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:10.113 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE3Yjc0NTlhOTM4NjAyMjg5ZDk2ZmRjMTQyYzJmNzU4YzY3ZGI2ZjMxNzhiYjg3B7nztw==: 00:34:10.113 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: ]] 00:34:10.113 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: 00:34:10.113 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:34:10.113 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.113 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:10.113 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:10.113 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:10.113 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.113 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:10.113 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.113 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.113 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.113 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.113 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:10.113 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:10.113 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:10.113 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.114 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.114 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:10.114 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.114 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:10.114 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:10.114 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:10.114 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:10.114 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.114 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.374 nvme0n1 00:34:10.374 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.374 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:34:10.374 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.374 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.374 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.374 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.374 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.374 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.374 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.374 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.374 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.374 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.374 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:34:10.374 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.374 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:10.374 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:10.374 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:10.374 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTU2NGYxY2ZiNzkzZmU2YzBlMWZmMTRmZTU2YmU3M2VlPNNf: 00:34:10.374 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: 00:34:10.374 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:10.374 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:10.374 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTU2NGYxY2ZiNzkzZmU2YzBlMWZmMTRmZTU2YmU3M2VlPNNf: 00:34:10.374 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: ]] 00:34:10.374 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: 00:34:10.374 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:34:10.374 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.374 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:10.374 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:10.374 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:10.374 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.374 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:10.374 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.374 16:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
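The outer structure driving these cycles shows up in the host/auth.sh@101-104 markers: one loop over the FFDHE groups and, inside it, one loop over the key ids, with each iteration setting the target-side key and then connecting from the host. A rough sketch of that sweep, assuming the keys array and helper names shown in the trace and listing only the groups exercised in this part of the log:

# Sketch of the sweep visible at host/auth.sh@101-104 in the trace above.
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)   # groups seen in this section
for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key sha256 "$dhgroup" "$keyid"    # target-side key/dhgroup setup
        connect_authenticate sha256 "$dhgroup" "$keyid"  # host-side attach/verify/detach
    done
done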
00:34:10.374 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.374 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.374 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:10.374 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:10.374 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:10.374 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.374 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.374 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:10.374 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.374 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:10.374 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:10.374 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:10.374 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:10.374 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.374 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.633 nvme0n1 00:34:10.633 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.633 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.633 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.633 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.633 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.633 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.633 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.633 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.633 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.633 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.633 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.633 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.633 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:34:10.633 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.633 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:10.633 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:10.633 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
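Before each attach, the host resolves which address to dial through get_main_ns_ip (the nvmf/common.sh@741-755 lines above): an associative array maps the transport to the name of an environment variable, tcp selects NVMF_INITIATOR_IP, and the resolved value (10.0.0.1 in this run) is echoed back. The sketch below reconstructs that selection from the xtrace lines; the transport variable name and the exact function body are assumptions, not a verbatim copy of nvmf/common.sh.

# Condensed sketch of the get_main_ns_ip logic visible in the xtrace.
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    # The transport is tcp in this run, so NVMF_INITIATOR_IP is chosen.
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable to read
    ip=${!ip}                              # indirect expansion -> 10.0.0.1
    [[ -z $ip ]] && return 1
    echo "$ip"
}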
00:34:10.633 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVkODUzZjlhZjMxOTc5MTI4MTljNmZmNjA2NThjZDcwMTQ1MjgzN2Y2ZTY0ZmNkwLyeCw==: 00:34:10.633 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: 00:34:10.633 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:10.633 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:10.633 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVkODUzZjlhZjMxOTc5MTI4MTljNmZmNjA2NThjZDcwMTQ1MjgzN2Y2ZTY0ZmNkwLyeCw==: 00:34:10.633 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: ]] 00:34:10.633 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: 00:34:10.633 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:34:10.633 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.633 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:10.633 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:10.633 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:10.633 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.633 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:10.633 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.633 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.633 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.633 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.633 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:10.633 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:10.633 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:10.634 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.634 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.634 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:10.634 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.634 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:10.634 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:10.634 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:10.634 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:10.634 16:39:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.634 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.199 nvme0n1 00:34:11.199 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.199 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.199 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.199 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.199 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.199 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.199 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.199 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.199 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.199 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.199 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.199 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.199 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:34:11.199 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.199 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:11.199 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:11.199 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:11.199 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODA4OTM5N2IwMzQzZjVlZTc4NmY4YjliYWQ3OTkzMDYyZjY5NzFjMDBkMzFlZGQ5ZjM4NDU3Zjc4MmRlNmE3MTxzJMs=: 00:34:11.199 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:11.199 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:11.199 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:11.199 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODA4OTM5N2IwMzQzZjVlZTc4NmY4YjliYWQ3OTkzMDYyZjY5NzFjMDBkMzFlZGQ5ZjM4NDU3Zjc4MmRlNmE3MTxzJMs=: 00:34:11.199 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:11.200 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:34:11.200 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.200 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:11.200 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:11.200 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:11.200 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.200 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:34:11.200 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.200 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.200 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.200 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.200 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:11.200 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:11.200 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:11.200 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.200 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.200 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:11.200 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.200 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:11.200 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:11.200 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:11.200 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:11.200 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.200 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.457 nvme0n1 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 
-- # local digest dhgroup keyid key ckey 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2NjQ1Y2JhMmQwODg0MjIyZjk5YzZkYzhiNmY2ODFEJl3u: 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2NjQ1Y2JhMmQwODg0MjIyZjk5YzZkYzhiNmY2ODFEJl3u: 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: ]] 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 
]] 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.458 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.048 nvme0n1 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE3Yjc0NTlhOTM4NjAyMjg5ZDk2ZmRjMTQyYzJmNzU4YzY3ZGI2ZjMxNzhiYjg3B7nztw==: 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE3Yjc0NTlhOTM4NjAyMjg5ZDk2ZmRjMTQyYzJmNzU4YzY3ZGI2ZjMxNzhiYjg3B7nztw==: 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: ]] 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 
1 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.048 16:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.616 nvme0n1 00:34:12.616 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.616 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.616 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.616 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.616 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.616 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.616 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.617 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.617 16:39:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.617 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.617 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.617 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.617 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:34:12.617 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.617 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:12.617 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:12.617 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:12.617 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTU2NGYxY2ZiNzkzZmU2YzBlMWZmMTRmZTU2YmU3M2VlPNNf: 00:34:12.617 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: 00:34:12.617 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:12.617 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:12.617 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTU2NGYxY2ZiNzkzZmU2YzBlMWZmMTRmZTU2YmU3M2VlPNNf: 00:34:12.617 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: ]] 00:34:12.617 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: 00:34:12.617 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:34:12.617 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.617 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:12.617 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:12.617 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:12.617 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.617 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:12.617 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.617 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.617 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.617 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.617 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:12.617 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:12.617 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:12.617 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.617 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.617 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:12.617 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.617 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:12.617 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:12.617 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:12.617 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:12.617 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.617 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.185 nvme0n1 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVkODUzZjlhZjMxOTc5MTI4MTljNmZmNjA2NThjZDcwMTQ1MjgzN2Y2ZTY0ZmNkwLyeCw==: 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVkODUzZjlhZjMxOTc5MTI4MTljNmZmNjA2NThjZDcwMTQ1MjgzN2Y2ZTY0ZmNkwLyeCw==: 00:34:13.185 
16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: ]] 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:13.185 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.186 16:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.754 nvme0n1 00:34:13.754 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.754 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.754 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.754 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.754 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:13.754 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.754 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.754 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.754 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.754 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.754 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.754 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.754 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:34:13.754 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.754 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:13.754 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:13.754 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:13.754 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODA4OTM5N2IwMzQzZjVlZTc4NmY4YjliYWQ3OTkzMDYyZjY5NzFjMDBkMzFlZGQ5ZjM4NDU3Zjc4MmRlNmE3MTxzJMs=: 00:34:13.754 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:13.754 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:13.754 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:13.754 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODA4OTM5N2IwMzQzZjVlZTc4NmY4YjliYWQ3OTkzMDYyZjY5NzFjMDBkMzFlZGQ5ZjM4NDU3Zjc4MmRlNmE3MTxzJMs=: 00:34:13.754 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:13.754 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:34:13.754 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.754 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:13.754 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:13.754 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:13.754 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.754 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:13.754 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.754 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.754 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.754 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.755 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:13.755 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:13.755 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:34:13.755 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.755 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.755 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:13.755 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.755 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:13.755 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:13.755 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:13.755 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:13.755 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.755 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.322 nvme0n1 00:34:14.322 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.322 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.322 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.322 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.322 16:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.322 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.322 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.322 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.322 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.322 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.322 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.322 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:14.322 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.322 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:34:14.322 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.322 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:14.322 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:14.322 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:14.322 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2NjQ1Y2JhMmQwODg0MjIyZjk5YzZkYzhiNmY2ODFEJl3u: 00:34:14.322 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: 00:34:14.323 16:39:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:14.323 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:14.323 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2NjQ1Y2JhMmQwODg0MjIyZjk5YzZkYzhiNmY2ODFEJl3u: 00:34:14.323 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: ]] 00:34:14.323 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: 00:34:14.323 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:34:14.323 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.323 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:14.323 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:14.323 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:14.323 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.323 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:14.323 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.323 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.323 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.323 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.323 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:14.323 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:14.323 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:14.323 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.323 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.323 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:14.323 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.323 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:14.323 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:14.323 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:14.323 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:14.323 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.323 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.702 nvme0n1 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE3Yjc0NTlhOTM4NjAyMjg5ZDk2ZmRjMTQyYzJmNzU4YzY3ZGI2ZjMxNzhiYjg3B7nztw==: 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE3Yjc0NTlhOTM4NjAyMjg5ZDk2ZmRjMTQyYzJmNzU4YzY3ZGI2ZjMxNzhiYjg3B7nztw==: 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: ]] 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.702 16:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.637 nvme0n1 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.637 16:39:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTU2NGYxY2ZiNzkzZmU2YzBlMWZmMTRmZTU2YmU3M2VlPNNf: 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTU2NGYxY2ZiNzkzZmU2YzBlMWZmMTRmZTU2YmU3M2VlPNNf: 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: ]] 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.637 16:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.573 nvme0n1 00:34:17.573 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.573 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.573 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.573 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.573 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.573 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVkODUzZjlhZjMxOTc5MTI4MTljNmZmNjA2NThjZDcwMTQ1MjgzN2Y2ZTY0ZmNkwLyeCw==: 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVkODUzZjlhZjMxOTc5MTI4MTljNmZmNjA2NThjZDcwMTQ1MjgzN2Y2ZTY0ZmNkwLyeCw==: 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: ]] 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:17.574 16:39:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.574 16:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.512 nvme0n1 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODA4OTM5N2IwMzQzZjVlZTc4NmY4YjliYWQ3OTkzMDYyZjY5NzFjMDBkMzFlZGQ5ZjM4NDU3Zjc4MmRlNmE3MTxzJMs=: 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODA4OTM5N2IwMzQzZjVlZTc4NmY4YjliYWQ3OTkzMDYyZjY5NzFjMDBkMzFlZGQ5ZjM4NDU3Zjc4MmRlNmE3MTxzJMs=: 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:18.512 16:39:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.512 16:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.455 nvme0n1 00:34:19.455 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.455 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.455 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.455 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.455 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.455 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.455 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.455 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.455 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.455 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.455 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.455 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:19.455 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:19.455 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.455 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:34:19.455 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.455 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:19.455 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:19.455 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:19.455 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2NjQ1Y2JhMmQwODg0MjIyZjk5YzZkYzhiNmY2ODFEJl3u: 00:34:19.455 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: 00:34:19.455 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:19.455 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:19.455 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2NjQ1Y2JhMmQwODg0MjIyZjk5YzZkYzhiNmY2ODFEJl3u: 00:34:19.455 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: ]] 00:34:19.455 
16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: 00:34:19.455 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:34:19.455 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.455 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:19.455 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:19.455 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:19.455 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.456 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:19.456 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.456 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.456 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.456 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.456 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:19.456 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:19.456 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:19.456 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.456 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.456 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:19.456 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.456 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:19.456 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:19.456 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:19.456 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:19.456 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.456 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.715 nvme0n1 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE3Yjc0NTlhOTM4NjAyMjg5ZDk2ZmRjMTQyYzJmNzU4YzY3ZGI2ZjMxNzhiYjg3B7nztw==: 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE3Yjc0NTlhOTM4NjAyMjg5ZDk2ZmRjMTQyYzJmNzU4YzY3ZGI2ZjMxNzhiYjg3B7nztw==: 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: ]] 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:19.715 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.716 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:19.716 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:19.716 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:19.716 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:19.716 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.716 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.975 nvme0n1 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTU2NGYxY2ZiNzkzZmU2YzBlMWZmMTRmZTU2YmU3M2VlPNNf: 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTU2NGYxY2ZiNzkzZmU2YzBlMWZmMTRmZTU2YmU3M2VlPNNf: 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: ]] 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.975 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.234 nvme0n1 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVkODUzZjlhZjMxOTc5MTI4MTljNmZmNjA2NThjZDcwMTQ1MjgzN2Y2ZTY0ZmNkwLyeCw==: 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVkODUzZjlhZjMxOTc5MTI4MTljNmZmNjA2NThjZDcwMTQ1MjgzN2Y2ZTY0ZmNkwLyeCw==: 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: ]] 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:20.234 16:39:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:20.234 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:20.235 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:20.235 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.235 16:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.493 nvme0n1 00:34:20.493 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.493 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.493 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.493 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.493 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.493 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.493 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.493 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.493 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.493 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.493 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.493 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.493 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:34:20.493 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.493 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 
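The entries above and below repeat host/auth.sh's sha384 key-rotation pass: for each DH group (ffdhe2048 here, then ffdhe3072 and ffdhe4096), every key id 0-4 is first installed on the kernel nvmet target, the host is pinned to that single digest/dhgroup pair via bdev_nvme_set_options, the controller is attached with the matching --dhchap-key/--dhchap-ctrlr-key, its presence is verified with bdev_nvme_get_controllers, and it is detached again. A minimal sketch of that loop follows, assuming rpc_cmd is the autotest wrapper around scripts/rpc.py and that keys[]/ckeys[] hold the DHHC-1 secrets that appear verbatim in the log; nvmet_auth_set_key's exact configfs writes are not reproduced here.

#!/usr/bin/env bash
# Hedged sketch of the connect/authenticate loop these log entries exercise.
# Assumptions: rpc_cmd wraps scripts/rpc.py against the running SPDK target,
# keys[]/ckeys[] are the pre-registered DHHC-1 secrets (key0..key4/ckey0..),
# and nvmet_auth_set_key pushes digest, dhgroup and key to the nvmet host.

digest=sha384
for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096; do
    for keyid in "${!keys[@]}"; do
        # Target side: install the secret (and controller secret) for this key id.
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

        # Host side: allow exactly one digest/dhgroup pair, then attach with
        # the matching host key and, when one exists, the controller key.
        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" \
            ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}

        # The attach only succeeds if DH-HMAC-CHAP authentication passed;
        # confirm the controller exists, then tear down for the next pass.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    done
done

The bare "nvme0n1" lines interleaved in the log are the namespace showing up on the host after each successful authenticated attach.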
00:34:20.493 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:20.493 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:20.493 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODA4OTM5N2IwMzQzZjVlZTc4NmY4YjliYWQ3OTkzMDYyZjY5NzFjMDBkMzFlZGQ5ZjM4NDU3Zjc4MmRlNmE3MTxzJMs=: 00:34:20.493 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:20.493 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:20.493 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:20.493 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODA4OTM5N2IwMzQzZjVlZTc4NmY4YjliYWQ3OTkzMDYyZjY5NzFjMDBkMzFlZGQ5ZjM4NDU3Zjc4MmRlNmE3MTxzJMs=: 00:34:20.493 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:20.493 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:34:20.493 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.493 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:20.493 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:20.493 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:20.493 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.493 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:20.493 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.493 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.493 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.493 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.493 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:20.493 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:20.494 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:20.494 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.494 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.494 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:20.494 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.494 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:20.494 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:20.494 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:20.494 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:20.494 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:34:20.494 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.494 nvme0n1 00:34:20.494 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2NjQ1Y2JhMmQwODg0MjIyZjk5YzZkYzhiNmY2ODFEJl3u: 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2NjQ1Y2JhMmQwODg0MjIyZjk5YzZkYzhiNmY2ODFEJl3u: 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: ]] 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:20.755 16:39:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.755 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.014 nvme0n1 00:34:21.014 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.015 16:39:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE3Yjc0NTlhOTM4NjAyMjg5ZDk2ZmRjMTQyYzJmNzU4YzY3ZGI2ZjMxNzhiYjg3B7nztw==: 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE3Yjc0NTlhOTM4NjAyMjg5ZDk2ZmRjMTQyYzJmNzU4YzY3ZGI2ZjMxNzhiYjg3B7nztw==: 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: ]] 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:21.015 16:39:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.015 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.273 nvme0n1 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTU2NGYxY2ZiNzkzZmU2YzBlMWZmMTRmZTU2YmU3M2VlPNNf: 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTU2NGYxY2ZiNzkzZmU2YzBlMWZmMTRmZTU2YmU3M2VlPNNf: 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: ]] 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.273 16:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.532 nvme0n1 00:34:21.532 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.532 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.532 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.532 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.532 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.532 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.532 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:34:21.532 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.532 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.532 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.532 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.532 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.532 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:34:21.533 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.533 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:21.533 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:21.533 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:21.533 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVkODUzZjlhZjMxOTc5MTI4MTljNmZmNjA2NThjZDcwMTQ1MjgzN2Y2ZTY0ZmNkwLyeCw==: 00:34:21.533 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: 00:34:21.533 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:21.533 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:21.533 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVkODUzZjlhZjMxOTc5MTI4MTljNmZmNjA2NThjZDcwMTQ1MjgzN2Y2ZTY0ZmNkwLyeCw==: 00:34:21.533 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: ]] 00:34:21.533 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: 00:34:21.533 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:34:21.533 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.533 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:21.533 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:21.533 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:21.533 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.533 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:21.533 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.533 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.533 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.533 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.533 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:21.533 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:21.533 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:34:21.533 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.533 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.533 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:21.533 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.533 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:21.533 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:21.533 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:21.533 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:21.533 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.533 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.791 nvme0n1 00:34:21.791 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODA4OTM5N2IwMzQzZjVlZTc4NmY4YjliYWQ3OTkzMDYyZjY5NzFjMDBkMzFlZGQ5ZjM4NDU3Zjc4MmRlNmE3MTxzJMs=: 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:21.792 
16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODA4OTM5N2IwMzQzZjVlZTc4NmY4YjliYWQ3OTkzMDYyZjY5NzFjMDBkMzFlZGQ5ZjM4NDU3Zjc4MmRlNmE3MTxzJMs=: 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.792 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.050 nvme0n1 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.050 
16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2NjQ1Y2JhMmQwODg0MjIyZjk5YzZkYzhiNmY2ODFEJl3u: 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2NjQ1Y2JhMmQwODg0MjIyZjk5YzZkYzhiNmY2ODFEJl3u: 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: ]] 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.050 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.308 nvme0n1 00:34:22.308 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.308 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.308 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.308 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.308 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.308 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.308 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.308 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.308 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.308 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.308 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.308 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.308 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:34:22.308 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.308 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:22.308 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:22.308 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:22.309 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDE3Yjc0NTlhOTM4NjAyMjg5ZDk2ZmRjMTQyYzJmNzU4YzY3ZGI2ZjMxNzhiYjg3B7nztw==: 00:34:22.309 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: 00:34:22.309 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:22.309 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:22.309 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE3Yjc0NTlhOTM4NjAyMjg5ZDk2ZmRjMTQyYzJmNzU4YzY3ZGI2ZjMxNzhiYjg3B7nztw==: 00:34:22.309 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: ]] 00:34:22.309 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: 00:34:22.309 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:34:22.309 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.309 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:22.309 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:22.309 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:22.309 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.309 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:22.309 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.309 16:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.309 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.309 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.309 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:22.309 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:22.309 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:22.309 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.309 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.309 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:22.309 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.309 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:22.309 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:22.309 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:22.309 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:22.309 16:39:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.309 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.567 nvme0n1 00:34:22.567 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.567 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.567 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.567 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.567 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.567 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.825 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.825 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.825 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.825 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.825 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.825 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.825 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:34:22.825 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.825 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:22.826 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:22.826 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:22.826 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTU2NGYxY2ZiNzkzZmU2YzBlMWZmMTRmZTU2YmU3M2VlPNNf: 00:34:22.826 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: 00:34:22.826 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:22.826 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:22.826 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTU2NGYxY2ZiNzkzZmU2YzBlMWZmMTRmZTU2YmU3M2VlPNNf: 00:34:22.826 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: ]] 00:34:22.826 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: 00:34:22.826 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:34:22.826 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.826 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:22.826 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:22.826 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:22.826 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.826 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:22.826 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.826 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.826 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.826 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.826 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:22.826 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:22.826 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:22.826 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.826 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.826 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:22.826 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.826 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:22.826 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:22.826 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:22.826 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:22.826 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.826 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.085 nvme0n1 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVkODUzZjlhZjMxOTc5MTI4MTljNmZmNjA2NThjZDcwMTQ1MjgzN2Y2ZTY0ZmNkwLyeCw==: 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVkODUzZjlhZjMxOTc5MTI4MTljNmZmNjA2NThjZDcwMTQ1MjgzN2Y2ZTY0ZmNkwLyeCw==: 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: ]] 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.085 16:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.345 nvme0n1 00:34:23.345 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.345 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.345 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.345 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.345 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.345 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.345 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.345 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.345 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.345 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.345 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.345 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.346 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:34:23.346 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.346 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:23.346 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:23.346 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:23.346 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODA4OTM5N2IwMzQzZjVlZTc4NmY4YjliYWQ3OTkzMDYyZjY5NzFjMDBkMzFlZGQ5ZjM4NDU3Zjc4MmRlNmE3MTxzJMs=: 00:34:23.346 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:23.346 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:23.346 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:23.346 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODA4OTM5N2IwMzQzZjVlZTc4NmY4YjliYWQ3OTkzMDYyZjY5NzFjMDBkMzFlZGQ5ZjM4NDU3Zjc4MmRlNmE3MTxzJMs=: 00:34:23.346 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:23.346 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:34:23.346 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.346 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:23.346 16:39:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:23.346 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:23.346 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.346 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:23.346 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.346 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.346 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.346 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.346 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:23.346 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:23.346 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:23.346 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.346 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.346 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:23.346 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.346 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:23.346 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:23.346 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:23.346 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:23.346 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.346 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.914 nvme0n1 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
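Note that the keyid=4 passes above attach with --dhchap-key only, because ckeys[4] is empty; the ckey=(${ckeys[keyid]:+...}) line in the trace is the bash idiom that makes the controller-key argument optional. A small self-contained illustration with placeholder key strings:

# ${var:+word} expands to word only when var is set and non-empty, so ckey becomes
# an empty array for keyids that have no bidirectional (controller) key.
declare -a ckeys=([2]="DHHC-1:01:Y2Q0..." [4]="")
for keyid in 2 4; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid extra args: ${ckey[*]:-<none>}"
done
# keyid=2 extra args: --dhchap-ctrlr-key ckey2
# keyid=4 extra args: <none>
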
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2NjQ1Y2JhMmQwODg0MjIyZjk5YzZkYzhiNmY2ODFEJl3u: 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2NjQ1Y2JhMmQwODg0MjIyZjk5YzZkYzhiNmY2ODFEJl3u: 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: ]] 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.914 16:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.482 nvme0n1 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE3Yjc0NTlhOTM4NjAyMjg5ZDk2ZmRjMTQyYzJmNzU4YzY3ZGI2ZjMxNzhiYjg3B7nztw==: 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDE3Yjc0NTlhOTM4NjAyMjg5ZDk2ZmRjMTQyYzJmNzU4YzY3ZGI2ZjMxNzhiYjg3B7nztw==: 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: ]] 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.482 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.049 nvme0n1 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.049 16:39:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTU2NGYxY2ZiNzkzZmU2YzBlMWZmMTRmZTU2YmU3M2VlPNNf: 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTU2NGYxY2ZiNzkzZmU2YzBlMWZmMTRmZTU2YmU3M2VlPNNf: 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: ]] 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.049 16:39:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.049 16:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.619 nvme0n1 00:34:25.619 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.619 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.619 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.619 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.619 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.619 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.619 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.619 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.619 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.619 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.619 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.619 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.619 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:34:25.619 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.619 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:25.619 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:25.619 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:25.619 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NmVkODUzZjlhZjMxOTc5MTI4MTljNmZmNjA2NThjZDcwMTQ1MjgzN2Y2ZTY0ZmNkwLyeCw==: 00:34:25.619 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: 00:34:25.619 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:25.619 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:25.619 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVkODUzZjlhZjMxOTc5MTI4MTljNmZmNjA2NThjZDcwMTQ1MjgzN2Y2ZTY0ZmNkwLyeCw==: 00:34:25.619 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: ]] 00:34:25.619 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: 00:34:25.620 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:34:25.620 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.620 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:25.620 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:25.620 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:25.620 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.620 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:25.620 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.620 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.620 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.620 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.620 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:25.620 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:25.620 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:25.620 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.620 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.620 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:25.620 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.620 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:25.620 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:25.620 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:25.620 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:25.620 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.620 
16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.189 nvme0n1 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODA4OTM5N2IwMzQzZjVlZTc4NmY4YjliYWQ3OTkzMDYyZjY5NzFjMDBkMzFlZGQ5ZjM4NDU3Zjc4MmRlNmE3MTxzJMs=: 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODA4OTM5N2IwMzQzZjVlZTc4NmY4YjliYWQ3OTkzMDYyZjY5NzFjMDBkMzFlZGQ5ZjM4NDU3Zjc4MmRlNmE3MTxzJMs=: 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
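Each pass in this trace ends the same way: confirm that a controller actually appeared (i.e. the DH-HMAC-CHAP handshake succeeded) and detach it before the next digest/dhgroup/key combination. A minimal sketch of that check, again going through scripts/rpc.py directly rather than the rpc_cmd wrapper:

# The attach is treated as successful only if the expected controller name comes back.
name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]] || { echo "auth failed: no controller attached" >&2; exit 1; }
./scripts/rpc.py bdev_nvme_detach_controller nvme0
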
common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.189 16:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.755 nvme0n1 00:34:26.755 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.755 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.755 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.755 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.755 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.755 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.755 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.755 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.755 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.755 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.755 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.755 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:26.755 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.755 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:34:26.755 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.755 16:39:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:26.756 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:26.756 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:26.756 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2NjQ1Y2JhMmQwODg0MjIyZjk5YzZkYzhiNmY2ODFEJl3u: 00:34:26.756 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: 00:34:26.756 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:26.756 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:26.756 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2NjQ1Y2JhMmQwODg0MjIyZjk5YzZkYzhiNmY2ODFEJl3u: 00:34:26.756 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: ]] 00:34:26.756 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: 00:34:26.756 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:34:26.756 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.756 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:26.756 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:26.756 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:26.756 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.756 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:26.756 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.756 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.756 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.756 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.756 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:26.756 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:26.756 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:26.756 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.756 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.756 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:26.756 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.756 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:26.756 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:26.756 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:26.756 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:26.756 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.756 16:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.690 nvme0n1 00:34:27.690 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.690 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.690 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.690 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.690 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE3Yjc0NTlhOTM4NjAyMjg5ZDk2ZmRjMTQyYzJmNzU4YzY3ZGI2ZjMxNzhiYjg3B7nztw==: 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE3Yjc0NTlhOTM4NjAyMjg5ZDk2ZmRjMTQyYzJmNzU4YzY3ZGI2ZjMxNzhiYjg3B7nztw==: 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: ]] 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.950 16:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.888 nvme0n1 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTU2NGYxY2ZiNzkzZmU2YzBlMWZmMTRmZTU2YmU3M2VlPNNf: 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTU2NGYxY2ZiNzkzZmU2YzBlMWZmMTRmZTU2YmU3M2VlPNNf: 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: ]] 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.888 
16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.888 16:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.828 nvme0n1 00:34:29.828 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.828 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.828 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.828 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.828 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.828 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.828 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.828 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.828 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.828 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.828 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.828 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.828 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:34:29.828 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.828 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:29.828 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:29.828 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:29.828 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVkODUzZjlhZjMxOTc5MTI4MTljNmZmNjA2NThjZDcwMTQ1MjgzN2Y2ZTY0ZmNkwLyeCw==: 00:34:29.828 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: 00:34:29.828 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:29.828 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:29.828 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVkODUzZjlhZjMxOTc5MTI4MTljNmZmNjA2NThjZDcwMTQ1MjgzN2Y2ZTY0ZmNkwLyeCw==: 00:34:29.828 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: ]] 00:34:29.828 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: 00:34:29.828 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:34:29.828 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.828 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:29.828 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:29.829 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:29.829 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.829 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:29.829 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.829 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.829 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.829 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.829 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:29.829 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:29.829 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:29.829 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.829 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.829 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:29.829 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.829 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:29.829 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:29.829 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:29.829 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:29.829 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.829 16:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.765 nvme0n1 00:34:30.765 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.765 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.765 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.765 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.765 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.765 16:39:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.765 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.765 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.765 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.765 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.765 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.765 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.765 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:34:30.765 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.765 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:30.766 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:30.766 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:30.766 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODA4OTM5N2IwMzQzZjVlZTc4NmY4YjliYWQ3OTkzMDYyZjY5NzFjMDBkMzFlZGQ5ZjM4NDU3Zjc4MmRlNmE3MTxzJMs=: 00:34:30.766 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:30.766 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:30.766 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:30.766 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODA4OTM5N2IwMzQzZjVlZTc4NmY4YjliYWQ3OTkzMDYyZjY5NzFjMDBkMzFlZGQ5ZjM4NDU3Zjc4MmRlNmE3MTxzJMs=: 00:34:30.766 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:30.766 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:34:30.766 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.766 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:30.766 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:30.766 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:30.766 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.766 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:30.766 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.766 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.766 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.766 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.766 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:30.766 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:30.766 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:30.766 16:39:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.766 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.766 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:30.766 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.766 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:30.766 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:30.766 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:30.766 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:30.766 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.766 16:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.705 nvme0n1 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2NjQ1Y2JhMmQwODg0MjIyZjk5YzZkYzhiNmY2ODFEJl3u: 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2NjQ1Y2JhMmQwODg0MjIyZjk5YzZkYzhiNmY2ODFEJl3u: 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: ]] 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.705 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:31.964 nvme0n1 00:34:31.964 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.964 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.964 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.964 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.964 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.964 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.964 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.964 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.964 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.964 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.964 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.964 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.964 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:34:31.964 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.964 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:31.964 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:31.964 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:31.964 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE3Yjc0NTlhOTM4NjAyMjg5ZDk2ZmRjMTQyYzJmNzU4YzY3ZGI2ZjMxNzhiYjg3B7nztw==: 00:34:31.964 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: 00:34:31.964 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:31.964 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:31.964 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE3Yjc0NTlhOTM4NjAyMjg5ZDk2ZmRjMTQyYzJmNzU4YzY3ZGI2ZjMxNzhiYjg3B7nztw==: 00:34:31.964 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: ]] 00:34:31.964 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: 00:34:31.964 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:34:31.964 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.964 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:31.964 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:31.965 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:31.965 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:31.965 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:31.965 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.965 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.965 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.965 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.965 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:31.965 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:31.965 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:31.965 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.965 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.965 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:31.965 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.965 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:31.965 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:31.965 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:31.965 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:31.965 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.965 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.224 nvme0n1 00:34:32.224 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.224 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.224 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.224 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.224 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.224 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.224 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.224 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.224 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.224 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.224 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.224 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.224 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:34:32.225 
16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.225 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:32.225 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:32.225 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:32.225 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTU2NGYxY2ZiNzkzZmU2YzBlMWZmMTRmZTU2YmU3M2VlPNNf: 00:34:32.225 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: 00:34:32.225 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:32.225 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:32.225 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTU2NGYxY2ZiNzkzZmU2YzBlMWZmMTRmZTU2YmU3M2VlPNNf: 00:34:32.225 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: ]] 00:34:32.225 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: 00:34:32.225 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:34:32.225 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.225 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:32.225 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:32.225 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:32.225 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.225 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:32.225 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.225 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.225 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.225 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.225 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:32.225 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:32.225 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:32.225 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.225 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.225 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:32.225 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.225 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:32.225 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:32.225 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:32.225 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:32.225 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.225 16:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.485 nvme0n1 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVkODUzZjlhZjMxOTc5MTI4MTljNmZmNjA2NThjZDcwMTQ1MjgzN2Y2ZTY0ZmNkwLyeCw==: 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVkODUzZjlhZjMxOTc5MTI4MTljNmZmNjA2NThjZDcwMTQ1MjgzN2Y2ZTY0ZmNkwLyeCw==: 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: ]] 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.485 
16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.485 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.744 nvme0n1 00:34:32.744 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.744 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.744 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.744 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.744 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.744 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.744 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.744 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.744 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.744 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:32.744 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.744 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.744 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:34:32.744 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.744 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:32.744 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:32.744 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:32.744 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODA4OTM5N2IwMzQzZjVlZTc4NmY4YjliYWQ3OTkzMDYyZjY5NzFjMDBkMzFlZGQ5ZjM4NDU3Zjc4MmRlNmE3MTxzJMs=: 00:34:32.744 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:32.744 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:32.744 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:32.744 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODA4OTM5N2IwMzQzZjVlZTc4NmY4YjliYWQ3OTkzMDYyZjY5NzFjMDBkMzFlZGQ5ZjM4NDU3Zjc4MmRlNmE3MTxzJMs=: 00:34:32.744 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:32.744 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:34:32.745 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.745 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:32.745 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:32.745 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:32.745 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.745 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:32.745 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.745 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.745 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.745 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.745 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:32.745 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:32.745 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:32.745 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.745 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.745 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:32.745 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.745 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:32.745 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:32.745 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:32.745 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:32.745 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.745 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.002 nvme0n1 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2NjQ1Y2JhMmQwODg0MjIyZjk5YzZkYzhiNmY2ODFEJl3u: 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2NjQ1Y2JhMmQwODg0MjIyZjk5YzZkYzhiNmY2ODFEJl3u: 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: ]] 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.002 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.259 nvme0n1 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.259 
16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE3Yjc0NTlhOTM4NjAyMjg5ZDk2ZmRjMTQyYzJmNzU4YzY3ZGI2ZjMxNzhiYjg3B7nztw==: 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE3Yjc0NTlhOTM4NjAyMjg5ZDk2ZmRjMTQyYzJmNzU4YzY3ZGI2ZjMxNzhiYjg3B7nztw==: 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: ]] 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:33.259 16:39:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.259 16:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.516 nvme0n1 00:34:33.516 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.516 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.516 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.516 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.516 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.516 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.516 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.516 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.516 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.516 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.516 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.516 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.516 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:34:33.516 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.516 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:33.516 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:33.517 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:33.517 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTU2NGYxY2ZiNzkzZmU2YzBlMWZmMTRmZTU2YmU3M2VlPNNf: 00:34:33.517 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: 00:34:33.517 16:39:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:33.517 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:33.517 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTU2NGYxY2ZiNzkzZmU2YzBlMWZmMTRmZTU2YmU3M2VlPNNf: 00:34:33.517 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: ]] 00:34:33.517 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: 00:34:33.517 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:34:33.517 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.517 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:33.517 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:33.517 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:33.517 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.517 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:33.517 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.517 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.517 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.517 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.517 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:33.517 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:33.517 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:33.517 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.517 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.517 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:33.517 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.517 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:33.517 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:33.517 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:33.517 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:33.517 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.517 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.774 nvme0n1 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVkODUzZjlhZjMxOTc5MTI4MTljNmZmNjA2NThjZDcwMTQ1MjgzN2Y2ZTY0ZmNkwLyeCw==: 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVkODUzZjlhZjMxOTc5MTI4MTljNmZmNjA2NThjZDcwMTQ1MjgzN2Y2ZTY0ZmNkwLyeCw==: 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: ]] 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.774 16:39:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.774 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.031 nvme0n1 00:34:34.031 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.031 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:34.031 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:34.031 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.031 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.031 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.031 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:34.031 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:34.031 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.031 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.031 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.031 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:34.032 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:34:34.032 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:34.032 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:34.032 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:34.032 
16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:34.032 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODA4OTM5N2IwMzQzZjVlZTc4NmY4YjliYWQ3OTkzMDYyZjY5NzFjMDBkMzFlZGQ5ZjM4NDU3Zjc4MmRlNmE3MTxzJMs=: 00:34:34.032 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:34.032 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:34.032 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:34.032 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODA4OTM5N2IwMzQzZjVlZTc4NmY4YjliYWQ3OTkzMDYyZjY5NzFjMDBkMzFlZGQ5ZjM4NDU3Zjc4MmRlNmE3MTxzJMs=: 00:34:34.032 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:34.032 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:34:34.032 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:34.032 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:34.032 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:34.032 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:34.032 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:34.032 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:34.032 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.032 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.032 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.032 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:34.032 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:34.032 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:34.032 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:34.032 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:34.032 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:34.032 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:34.032 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:34.032 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:34.032 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:34.032 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:34.032 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:34.032 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.032 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:34.291 nvme0n1 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2NjQ1Y2JhMmQwODg0MjIyZjk5YzZkYzhiNmY2ODFEJl3u: 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2NjQ1Y2JhMmQwODg0MjIyZjk5YzZkYzhiNmY2ODFEJl3u: 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: ]] 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:34.291 16:39:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.291 16:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.551 nvme0n1 00:34:34.551 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.551 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:34.551 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.551 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.551 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:34.551 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.551 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:34.551 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:34.551 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.551 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.551 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.551 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:34.551 16:39:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:34:34.551 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:34.551 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:34.551 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:34.551 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:34.551 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE3Yjc0NTlhOTM4NjAyMjg5ZDk2ZmRjMTQyYzJmNzU4YzY3ZGI2ZjMxNzhiYjg3B7nztw==: 00:34:34.552 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: 00:34:34.552 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:34.552 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:34.552 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE3Yjc0NTlhOTM4NjAyMjg5ZDk2ZmRjMTQyYzJmNzU4YzY3ZGI2ZjMxNzhiYjg3B7nztw==: 00:34:34.552 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: ]] 00:34:34.552 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: 00:34:34.552 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:34:34.552 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:34.552 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:34.552 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:34.552 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:34.552 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:34.552 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:34.552 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.552 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.552 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.552 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:34.552 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:34.552 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:34.552 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:34.552 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:34.552 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:34.552 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:34.552 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:34.552 16:39:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:34.552 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:34.552 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:34.552 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:34.552 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.552 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.121 nvme0n1 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTU2NGYxY2ZiNzkzZmU2YzBlMWZmMTRmZTU2YmU3M2VlPNNf: 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTU2NGYxY2ZiNzkzZmU2YzBlMWZmMTRmZTU2YmU3M2VlPNNf: 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: ]] 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:35.121 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:35.122 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:35.122 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.122 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.382 nvme0n1 00:34:35.382 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.382 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.382 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.382 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.382 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.382 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.382 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.382 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:35.382 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.382 16:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.382 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.382 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.382 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:34:35.382 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.382 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:35.382 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:35.382 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:35.382 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVkODUzZjlhZjMxOTc5MTI4MTljNmZmNjA2NThjZDcwMTQ1MjgzN2Y2ZTY0ZmNkwLyeCw==: 00:34:35.382 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: 00:34:35.382 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:35.382 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:35.382 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVkODUzZjlhZjMxOTc5MTI4MTljNmZmNjA2NThjZDcwMTQ1MjgzN2Y2ZTY0ZmNkwLyeCw==: 00:34:35.382 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: ]] 00:34:35.382 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: 00:34:35.382 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:34:35.382 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.382 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:35.382 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:35.382 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:35.382 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:35.382 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:35.382 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.382 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.382 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.382 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:35.382 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:35.382 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:35.382 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:35.382 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.382 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.382 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:35.382 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.382 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:35.382 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:35.382 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:35.382 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:35.382 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.382 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.642 nvme0n1 00:34:35.642 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.642 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.642 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.642 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.642 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.642 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.642 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.642 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:35.642 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.642 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.642 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.642 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.642 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:34:35.642 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.642 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:35.642 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:35.642 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:35.642 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODA4OTM5N2IwMzQzZjVlZTc4NmY4YjliYWQ3OTkzMDYyZjY5NzFjMDBkMzFlZGQ5ZjM4NDU3Zjc4MmRlNmE3MTxzJMs=: 00:34:35.642 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:35.642 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:35.642 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:35.642 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODA4OTM5N2IwMzQzZjVlZTc4NmY4YjliYWQ3OTkzMDYyZjY5NzFjMDBkMzFlZGQ5ZjM4NDU3Zjc4MmRlNmE3MTxzJMs=: 00:34:35.642 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:35.642 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:34:35.642 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.642 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:35.643 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:35.643 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:35.643 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:35.643 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:35.643 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.643 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.643 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.643 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:35.643 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:35.643 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:35.643 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:35.643 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.643 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.643 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:35.643 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.643 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:35.643 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:35.643 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:35.643 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:35.643 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.643 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.901 nvme0n1 00:34:35.901 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.901 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.901 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.901 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.901 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.901 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2NjQ1Y2JhMmQwODg0MjIyZjk5YzZkYzhiNmY2ODFEJl3u: 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2NjQ1Y2JhMmQwODg0MjIyZjk5YzZkYzhiNmY2ODFEJl3u: 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: ]] 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.161 16:39:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.161 16:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.766 nvme0n1 00:34:36.766 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.766 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.766 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.766 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.766 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:36.766 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.766 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:36.766 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:36.766 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.766 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.766 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.766 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:36.766 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:34:36.766 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:36.766 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:36.766 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:36.766 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:36.767 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDE3Yjc0NTlhOTM4NjAyMjg5ZDk2ZmRjMTQyYzJmNzU4YzY3ZGI2ZjMxNzhiYjg3B7nztw==: 00:34:36.767 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: 00:34:36.767 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:36.767 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:36.767 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE3Yjc0NTlhOTM4NjAyMjg5ZDk2ZmRjMTQyYzJmNzU4YzY3ZGI2ZjMxNzhiYjg3B7nztw==: 00:34:36.767 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: ]] 00:34:36.767 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: 00:34:36.767 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:34:36.767 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:36.767 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:36.767 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:36.767 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:36.767 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:36.767 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:36.767 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.767 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.767 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.767 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:36.767 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:36.767 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:36.767 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:36.767 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:36.767 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:36.767 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:36.767 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:36.767 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:36.767 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:36.767 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:36.767 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:36.767 16:39:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.767 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.337 nvme0n1 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTU2NGYxY2ZiNzkzZmU2YzBlMWZmMTRmZTU2YmU3M2VlPNNf: 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTU2NGYxY2ZiNzkzZmU2YzBlMWZmMTRmZTU2YmU3M2VlPNNf: 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: ]] 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.337 16:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.904 nvme0n1 00:34:37.904 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.904 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:37.904 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.904 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.904 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:37.904 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.904 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:37.904 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:37.904 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.904 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.904 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.904 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:37.904 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:34:37.904 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.904 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:37.904 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:37.904 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:37.904 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVkODUzZjlhZjMxOTc5MTI4MTljNmZmNjA2NThjZDcwMTQ1MjgzN2Y2ZTY0ZmNkwLyeCw==: 00:34:37.904 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: 00:34:37.905 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:37.905 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:37.905 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVkODUzZjlhZjMxOTc5MTI4MTljNmZmNjA2NThjZDcwMTQ1MjgzN2Y2ZTY0ZmNkwLyeCw==: 00:34:37.905 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: ]] 00:34:37.905 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: 00:34:37.905 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:34:37.905 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:37.905 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:37.905 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:37.905 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:37.905 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:37.905 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:37.905 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.905 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.905 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.905 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:37.905 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:37.905 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:37.905 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:37.905 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.905 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:37.905 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:37.905 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.905 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:37.905 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:37.905 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:37.905 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:37.905 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.905 16:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.471 nvme0n1 00:34:38.471 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.471 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.471 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.471 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.471 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.471 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.471 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.471 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.472 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.472 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.472 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.472 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.472 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:34:38.472 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.472 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:38.472 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:38.472 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:38.472 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODA4OTM5N2IwMzQzZjVlZTc4NmY4YjliYWQ3OTkzMDYyZjY5NzFjMDBkMzFlZGQ5ZjM4NDU3Zjc4MmRlNmE3MTxzJMs=: 00:34:38.472 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:38.472 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:38.472 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:38.472 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODA4OTM5N2IwMzQzZjVlZTc4NmY4YjliYWQ3OTkzMDYyZjY5NzFjMDBkMzFlZGQ5ZjM4NDU3Zjc4MmRlNmE3MTxzJMs=: 00:34:38.472 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:38.472 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:34:38.472 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.472 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:38.472 16:39:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:38.472 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:38.472 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.472 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:38.472 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.472 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.472 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.472 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.472 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:38.472 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:38.472 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:38.472 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.472 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.472 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:38.472 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.472 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:38.472 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:38.472 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:38.472 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:38.472 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.472 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.042 nvme0n1 00:34:39.042 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.042 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.042 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.042 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.042 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:39.042 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.042 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.042 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.042 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.042 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.042 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.042 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:39.042 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:39.042 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:34:39.042 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.042 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:39.042 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:39.042 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:39.042 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2NjQ1Y2JhMmQwODg0MjIyZjk5YzZkYzhiNmY2ODFEJl3u: 00:34:39.042 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: 00:34:39.042 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:39.042 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:39.042 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2NjQ1Y2JhMmQwODg0MjIyZjk5YzZkYzhiNmY2ODFEJl3u: 00:34:39.042 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: ]] 00:34:39.042 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZjNGNjMDIyM2M5MjZjOThiNmMzNWVkYjc2MzBmODM5YmM1NDYyODZkNjE4OWFlMmRhMjBmN2FhNThmMGFhZdH9aPo=: 00:34:39.042 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:34:39.042 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:39.042 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:39.042 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:39.042 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:39.042 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.043 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:39.043 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.043 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.043 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.043 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.043 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:39.043 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:39.043 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:39.043 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.043 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.043 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:39.043 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.043 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:39.043 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:39.043 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:39.043 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:39.043 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.043 16:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.421 nvme0n1 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE3Yjc0NTlhOTM4NjAyMjg5ZDk2ZmRjMTQyYzJmNzU4YzY3ZGI2ZjMxNzhiYjg3B7nztw==: 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDE3Yjc0NTlhOTM4NjAyMjg5ZDk2ZmRjMTQyYzJmNzU4YzY3ZGI2ZjMxNzhiYjg3B7nztw==: 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: ]] 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:40.421 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.422 16:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.360 nvme0n1 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.360 16:40:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTU2NGYxY2ZiNzkzZmU2YzBlMWZmMTRmZTU2YmU3M2VlPNNf: 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTU2NGYxY2ZiNzkzZmU2YzBlMWZmMTRmZTU2YmU3M2VlPNNf: 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: ]] 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2Q0MTUwNDU1OWMxN2FhN2I5OWIwNzNkYzE0ZWE2MTV0XtSk: 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.360 16:40:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.360 16:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.297 nvme0n1 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NmVkODUzZjlhZjMxOTc5MTI4MTljNmZmNjA2NThjZDcwMTQ1MjgzN2Y2ZTY0ZmNkwLyeCw==: 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVkODUzZjlhZjMxOTc5MTI4MTljNmZmNjA2NThjZDcwMTQ1MjgzN2Y2ZTY0ZmNkwLyeCw==: 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: ]] 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmY5ZjBjMWY5MmRlYTA2NjBjMDM4NTVjNDkzMGJiZTnMWn52: 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:42.297 16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.297 
16:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.235 nvme0n1 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODA4OTM5N2IwMzQzZjVlZTc4NmY4YjliYWQ3OTkzMDYyZjY5NzFjMDBkMzFlZGQ5ZjM4NDU3Zjc4MmRlNmE3MTxzJMs=: 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODA4OTM5N2IwMzQzZjVlZTc4NmY4YjliYWQ3OTkzMDYyZjY5NzFjMDBkMzFlZGQ5ZjM4NDU3Zjc4MmRlNmE3MTxzJMs=: 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.235 16:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.171 nvme0n1 00:34:44.171 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.171 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:44.171 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:44.171 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.171 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.171 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.171 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:44.171 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:44.171 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.171 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.171 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.171 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:44.171 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:44.171 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:44.171 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:44.171 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:34:44.171 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE3Yjc0NTlhOTM4NjAyMjg5ZDk2ZmRjMTQyYzJmNzU4YzY3ZGI2ZjMxNzhiYjg3B7nztw==: 00:34:44.171 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: 00:34:44.171 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:44.171 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:44.171 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE3Yjc0NTlhOTM4NjAyMjg5ZDk2ZmRjMTQyYzJmNzU4YzY3ZGI2ZjMxNzhiYjg3B7nztw==: 00:34:44.171 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: ]] 00:34:44.171 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWNjNGY3NzAwNjUwM2VjNDllN2FjMmJiMTk1MzNjZGM0NTUyNTBlYjdkMDZiZTE4n/lN3w==: 00:34:44.171 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:44.171 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.171 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.171 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.171 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:34:44.171 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:44.171 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:44.171 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:44.171 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:44.171 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:44.172 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:44.172 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:44.172 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:44.172 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:44.172 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:44.172 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:44.172 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:44.172 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:44.172 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:44.172 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:44.172 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:44.172 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:44.172 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:44.172 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.172 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.431 request: 00:34:44.431 { 00:34:44.431 "name": "nvme0", 00:34:44.431 "trtype": "tcp", 00:34:44.431 "traddr": "10.0.0.1", 00:34:44.431 "adrfam": "ipv4", 00:34:44.431 "trsvcid": "4420", 00:34:44.431 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:44.431 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:44.431 "prchk_reftag": false, 00:34:44.431 "prchk_guard": false, 00:34:44.431 "hdgst": false, 00:34:44.431 "ddgst": false, 00:34:44.431 "method": "bdev_nvme_attach_controller", 00:34:44.431 "req_id": 1 00:34:44.431 } 00:34:44.431 Got JSON-RPC error response 00:34:44.431 response: 00:34:44.431 { 00:34:44.432 "code": -5, 00:34:44.432 "message": "Input/output error" 00:34:44.432 } 00:34:44.432 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:44.432 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:44.432 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:44.432 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:44.432 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:44.432 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:34:44.432 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:34:44.432 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.432 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.432 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.432 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:34:44.432 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:34:44.432 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:44.432 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:44.432 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:44.432 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:44.432 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:44.432 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:44.432 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:44.432 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:44.432 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:44.432 16:40:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:44.432 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:44.432 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:44.432 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:44.432 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:44.432 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:44.432 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:44.432 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:44.432 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:44.432 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.432 16:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.432 request: 00:34:44.432 { 00:34:44.432 "name": "nvme0", 00:34:44.432 "trtype": "tcp", 00:34:44.432 "traddr": "10.0.0.1", 00:34:44.432 "adrfam": "ipv4", 00:34:44.432 "trsvcid": "4420", 00:34:44.432 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:44.432 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:44.432 "prchk_reftag": false, 00:34:44.432 "prchk_guard": false, 00:34:44.432 "hdgst": false, 00:34:44.432 "ddgst": false, 00:34:44.432 "dhchap_key": "key2", 00:34:44.432 "method": "bdev_nvme_attach_controller", 00:34:44.432 "req_id": 1 00:34:44.432 } 00:34:44.432 Got JSON-RPC error response 00:34:44.432 response: 00:34:44.432 { 00:34:44.432 "code": -5, 00:34:44.432 "message": "Input/output error" 00:34:44.432 } 00:34:44.432 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:44.432 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:44.432 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:44.432 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:44.432 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:44.432 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:34:44.432 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.432 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:34:44.432 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.432 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.432 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:34:44.432 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:34:44.432 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:44.432 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:44.432 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:44.432 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:44.432 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:44.432 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:44.432 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:44.432 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:44.432 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:44.432 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:44.432 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:44.432 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:44.432 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:44.432 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:44.432 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:44.432 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:44.432 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:44.432 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:44.432 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.432 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.691 request: 00:34:44.691 { 00:34:44.691 "name": "nvme0", 00:34:44.691 "trtype": "tcp", 00:34:44.691 "traddr": "10.0.0.1", 00:34:44.691 "adrfam": "ipv4", 00:34:44.691 "trsvcid": "4420", 00:34:44.691 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:44.691 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:44.691 "prchk_reftag": false, 00:34:44.691 "prchk_guard": false, 00:34:44.691 "hdgst": false, 00:34:44.691 "ddgst": false, 00:34:44.691 "dhchap_key": "key1", 00:34:44.691 "dhchap_ctrlr_key": "ckey2", 00:34:44.691 "method": "bdev_nvme_attach_controller", 00:34:44.691 "req_id": 1 00:34:44.691 } 00:34:44.691 Got JSON-RPC error response 00:34:44.691 response: 00:34:44.691 { 00:34:44.691 "code": -5, 00:34:44.691 "message": "Input/output error" 00:34:44.691 } 00:34:44.691 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:44.691 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:44.691 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:44.691 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:44.691 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:44.691 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:34:44.691 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:34:44.691 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:34:44.691 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:44.691 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:34:44.691 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:44.691 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:34:44.691 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:44.691 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:44.691 rmmod nvme_tcp 00:34:44.691 rmmod nvme_fabrics 00:34:44.691 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:44.691 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:34:44.691 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:34:44.691 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 800896 ']' 00:34:44.691 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 800896 00:34:44.691 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 800896 ']' 00:34:44.691 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 800896 00:34:44.691 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:34:44.691 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:44.691 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 800896 00:34:44.691 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:44.691 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:44.691 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 800896' 00:34:44.691 killing process with pid 800896 00:34:44.691 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 800896 00:34:44.691 16:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 800896 00:34:45.628 16:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:45.628 16:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:45.628 16:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:45.628 16:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:45.628 16:40:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:45.628 16:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:45.628 16:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:45.628 16:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:47.529 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:47.787 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:47.787 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:47.787 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:34:47.787 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:34:47.787 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:34:47.787 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:47.787 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:47.787 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:47.787 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:47.787 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:34:47.787 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:34:47.787 16:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:49.163 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:49.163 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:49.163 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:49.163 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:49.163 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:49.163 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:49.163 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:49.163 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:49.163 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:49.163 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:49.163 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:49.163 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:49.163 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:49.163 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:49.163 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:49.163 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:50.102 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:34:50.102 16:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.qEv /tmp/spdk.key-null.Jxn /tmp/spdk.key-sha256.kEh /tmp/spdk.key-sha384.zFq /tmp/spdk.key-sha512.YaQ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:34:50.102 16:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:51.478 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:51.478 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:51.478 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:51.478 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:51.478 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:51.478 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:51.478 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:51.478 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:51.478 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:51.478 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:51.478 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:51.478 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:51.478 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:51.478 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:51.478 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:51.478 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:51.478 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:51.478 00:34:51.478 real 0m51.247s 00:34:51.478 user 0m49.073s 00:34:51.478 sys 0m5.899s 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.478 ************************************ 00:34:51.478 END TEST nvmf_auth_host 00:34:51.478 ************************************ 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.478 ************************************ 00:34:51.478 START TEST nvmf_digest 00:34:51.478 ************************************ 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:51.478 * Looking for test storage... 
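The nvmf_auth_host stage that finishes above drives the same two-RPC sequence for each digest/dhgroup/key combination against the kernel nvmet target at 10.0.0.1:4420, then repeats a few attach attempts that are expected to fail (no key, the wrong key, a mismatched controller key) and checks for the -5 "Input/output error" JSON-RPC responses recorded in the trace. A minimal sketch of one positive-path iteration, using the suite's rpc_cmd wrapper exactly as the trace does and assuming the DHHC-1 secrets were registered as key0/ckey0 earlier in the run (that setup is outside this excerpt):

  # limit the initiator to one DH-HMAC-CHAP digest and FFDHE group
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

  # attach with host key 0 and the bidirectional controller key 0
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # confirm the controller exists, then detach before the next combination
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0

The expected-failure cases wrap the same attach call in the suite's NOT helper (e.g. NOT rpc_cmd bdev_nvme_attach_controller ... --dhchap-key key2), so the I/O error shown above counts as a pass.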
00:34:51.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:51.478 
16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:51.478 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:51.479 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:51.479 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:34:51.479 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:53.381 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:53.381 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:53.381 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:53.382 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:53.382 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:53.382 
16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:53.382 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:53.382 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:53.382 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:53.382 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:53.382 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:53.382 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:53.382 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:53.382 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:53.382 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:53.382 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:53.382 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:53.382 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:53.382 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:53.382 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:53.382 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:34:53.382 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:53.382 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:53.382 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:53.382 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:53.382 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:53.382 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:53.382 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:53.382 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:53.382 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:53.382 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:53.382 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:53.382 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:53.382 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:53.382 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:53.382 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:53.382 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:53.382 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:53.382 16:40:12 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:53.382 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:53.382 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:53.382 16:40:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:53.382 16:40:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:53.382 16:40:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:53.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:53.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:34:53.382 00:34:53.382 --- 10.0.0.2 ping statistics --- 00:34:53.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:53.382 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:34:53.382 16:40:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:53.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:53.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:34:53.382 00:34:53.382 --- 10.0.0.1 ping statistics --- 00:34:53.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:53.382 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:34:53.382 16:40:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:53.382 16:40:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:34:53.382 16:40:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:53.382 16:40:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:53.382 16:40:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:53.382 16:40:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:53.382 16:40:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:53.382 16:40:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:53.382 16:40:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:53.382 16:40:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:53.382 16:40:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:34:53.382 16:40:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:34:53.382 16:40:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:53.382 16:40:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:53.382 16:40:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:53.382 ************************************ 00:34:53.382 START TEST nvmf_digest_clean 00:34:53.382 ************************************ 00:34:53.382 16:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:34:53.382 16:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:34:53.382 16:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:34:53.382 16:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:34:53.382 16:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:34:53.382 16:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:34:53.382 16:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:53.382 16:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:53.382 16:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:53.382 16:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=810514 00:34:53.382 16:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:53.382 16:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 810514 00:34:53.382 16:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 810514 ']' 00:34:53.382 16:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:53.382 16:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:53.382 16:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:53.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:53.382 16:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:53.382 16:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:53.641 [2024-07-26 16:40:13.170743] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:34:53.641 [2024-07-26 16:40:13.170888] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:53.641 EAL: No free 2048 kB hugepages reported on node 1 00:34:53.641 [2024-07-26 16:40:13.304278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:53.900 [2024-07-26 16:40:13.525266] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:53.900 [2024-07-26 16:40:13.525336] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:53.900 [2024-07-26 16:40:13.525360] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:53.900 [2024-07-26 16:40:13.525396] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:53.900 [2024-07-26 16:40:13.525414] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
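The nvmf_tcp_init sequence traced above wires the two ice ports into a point-to-point test topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, cvl_0_1 stays in the default namespace as 10.0.0.1, port 4420 is opened in iptables, both directions are ping-checked, and the target application is then launched through ip netns exec. A condensed sketch of that setup, reusing the interface names and addresses from this run, looks roughly like:

    # Condensed from the nvmf/common.sh trace above (names/IPs taken from this run).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into its own netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, default netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the netns
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # allow NVMe/TCP into the host
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1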
00:34:53.900 [2024-07-26 16:40:13.525460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:54.464 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:54.464 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:34:54.464 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:54.464 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:54.464 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:54.464 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:54.464 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:34:54.464 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:34:54.464 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:34:54.464 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.464 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:55.039 null0 00:34:55.039 [2024-07-26 16:40:14.526447] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:55.039 [2024-07-26 16:40:14.550707] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:55.039 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:55.039 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:34:55.039 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:55.039 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:55.039 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:55.039 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:55.039 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:55.039 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:55.039 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=810739 00:34:55.039 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:55.039 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 810739 /var/tmp/bperf.sock 00:34:55.039 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 810739 ']' 00:34:55.039 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:55.039 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:34:55.039 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:55.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:55.039 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:55.039 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:55.039 [2024-07-26 16:40:14.642256] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:34:55.039 [2024-07-26 16:40:14.642420] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid810739 ] 00:34:55.039 EAL: No free 2048 kB hugepages reported on node 1 00:34:55.039 [2024-07-26 16:40:14.787057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:55.309 [2024-07-26 16:40:15.054005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:55.874 16:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:55.874 16:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:34:55.874 16:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:55.874 16:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:55.874 16:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:56.442 16:40:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:56.442 16:40:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:57.007 nvme0n1 00:34:57.007 16:40:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:57.007 16:40:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:57.267 Running I/O for 2 seconds... 
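Each run_bperf iteration drives I/O from a paused bdevperf instance: the app is started with -z --wait-for-rpc on its own RPC socket, initialized with framework_start_init, attached to the target listener with the NVMe/TCP data digest enabled (--ddgst), and then exercised through bdevperf.py. Reduced to the bare calls traced above (paths, address and NQN are the ones used in this run), the flow is roughly:

    # Sketch of the bperf control flow for the digest-clean test.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF=/var/tmp/bperf.sock
    $SPDK/build/examples/bdevperf -m 2 -r $BPERF -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    # (the harness waits for the socket before issuing RPCs)
    $SPDK/scripts/rpc.py -s $BPERF framework_start_init
    $SPDK/scripts/rpc.py -s $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0     # --ddgst turns on the CRC32C data digest
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF perform_tests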
00:34:59.174 00:34:59.174 Latency(us) 00:34:59.174 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:59.174 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:59.174 nvme0n1 : 2.01 14046.59 54.87 0.00 0.00 9094.93 4733.16 19709.35 00:34:59.174 =================================================================================================================== 00:34:59.174 Total : 14046.59 54.87 0.00 0.00 9094.93 4733.16 19709.35 00:34:59.174 0 00:34:59.174 16:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:59.174 16:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:59.174 16:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:59.174 16:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:59.174 | select(.opcode=="crc32c") 00:34:59.174 | "\(.module_name) \(.executed)"' 00:34:59.174 16:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:59.434 16:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:59.434 16:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:59.434 16:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:59.434 16:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:59.434 16:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 810739 00:34:59.434 16:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 810739 ']' 00:34:59.434 16:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 810739 00:34:59.434 16:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:34:59.434 16:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:59.434 16:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 810739 00:34:59.434 16:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:59.434 16:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:59.434 16:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 810739' 00:34:59.434 killing process with pid 810739 00:34:59.434 16:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 810739 00:34:59.434 Received shutdown signal, test time was about 2.000000 seconds 00:34:59.434 00:34:59.434 Latency(us) 00:34:59.434 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:59.434 =================================================================================================================== 00:34:59.434 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:59.434 16:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 810739 00:35:00.812 16:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:35:00.812 16:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:00.812 16:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:00.812 16:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:00.812 16:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:00.812 16:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:00.813 16:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:00.813 16:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=811411 00:35:00.813 16:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 811411 /var/tmp/bperf.sock 00:35:00.813 16:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:00.813 16:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 811411 ']' 00:35:00.813 16:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:00.813 16:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:00.813 16:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:00.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:00.813 16:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:00.813 16:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:00.813 [2024-07-26 16:40:20.301446] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:35:00.813 [2024-07-26 16:40:20.301588] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid811411 ] 00:35:00.813 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:00.813 Zero copy mechanism will not be used. 
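After each two-second run the summary line can be cross-checked directly (14046.59 IOPS x 4 KiB is about 54.87 MiB/s, matching the MiB/s column), but the pass/fail decision does not rest on throughput: the test pulls the accel framework statistics out of bdevperf and verifies that the crc32c opcode was actually executed by the expected module, which is "software" here because scan_dsa is false. The check seen in the accel_get_stats/jq trace above reduces to something like:

    # Sketch of the crc32c verification; socket path as used in this run.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' \
      | { read -r acc_module acc_executed
          # pass only if crc32c ran at least once and on the expected module
          (( acc_executed > 0 )) && [[ $acc_module == software ]] && echo "crc32c executed in software"; }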
00:35:00.813 EAL: No free 2048 kB hugepages reported on node 1 00:35:00.813 [2024-07-26 16:40:20.421809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:01.070 [2024-07-26 16:40:20.678027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:01.634 16:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:01.634 16:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:35:01.634 16:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:01.634 16:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:01.634 16:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:02.197 16:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:02.197 16:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:02.455 nvme0n1 00:35:02.455 16:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:02.455 16:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:02.712 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:02.712 Zero copy mechanism will not be used. 00:35:02.712 Running I/O for 2 seconds... 
00:35:04.611 00:35:04.611 Latency(us) 00:35:04.611 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:04.611 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:04.611 nvme0n1 : 2.00 2063.37 257.92 0.00 0.00 7746.36 2949.12 13010.11 00:35:04.611 =================================================================================================================== 00:35:04.611 Total : 2063.37 257.92 0.00 0.00 7746.36 2949.12 13010.11 00:35:04.611 0 00:35:04.611 16:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:04.611 16:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:04.611 16:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:04.611 16:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:04.611 | select(.opcode=="crc32c") 00:35:04.611 | "\(.module_name) \(.executed)"' 00:35:04.611 16:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:04.868 16:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:04.868 16:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:04.868 16:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:04.868 16:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:04.868 16:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 811411 00:35:04.868 16:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 811411 ']' 00:35:04.868 16:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 811411 00:35:04.868 16:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:35:04.868 16:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:04.868 16:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 811411 00:35:05.126 16:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:05.126 16:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:05.126 16:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 811411' 00:35:05.126 killing process with pid 811411 00:35:05.126 16:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 811411 00:35:05.126 Received shutdown signal, test time was about 2.000000 seconds 00:35:05.126 00:35:05.126 Latency(us) 00:35:05.126 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:05.126 =================================================================================================================== 00:35:05.126 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:05.126 16:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 811411 00:35:06.059 16:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:35:06.059 16:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:06.059 16:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:06.059 16:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:06.059 16:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:06.059 16:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:06.059 16:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:06.059 16:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=811962 00:35:06.059 16:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 811962 /var/tmp/bperf.sock 00:35:06.059 16:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:06.059 16:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 811962 ']' 00:35:06.059 16:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:06.059 16:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:06.059 16:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:06.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:06.059 16:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:06.059 16:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:06.059 [2024-07-26 16:40:25.665711] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:35:06.059 [2024-07-26 16:40:25.665905] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid811962 ] 00:35:06.059 EAL: No free 2048 kB hugepages reported on node 1 00:35:06.059 [2024-07-26 16:40:25.801113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:06.316 [2024-07-26 16:40:26.058989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:06.881 16:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:06.881 16:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:35:06.881 16:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:06.881 16:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:06.881 16:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:07.815 16:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:07.815 16:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:07.815 nvme0n1 00:35:07.815 16:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:07.815 16:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:08.073 Running I/O for 2 seconds... 
00:35:09.973 00:35:09.973 Latency(us) 00:35:09.973 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:09.973 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:09.973 nvme0n1 : 2.01 14208.11 55.50 0.00 0.00 8982.65 5218.61 14369.37 00:35:09.973 =================================================================================================================== 00:35:09.973 Total : 14208.11 55.50 0.00 0.00 8982.65 5218.61 14369.37 00:35:09.973 0 00:35:09.973 16:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:09.973 16:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:09.973 16:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:09.973 16:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:09.973 16:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:09.973 | select(.opcode=="crc32c") 00:35:09.973 | "\(.module_name) \(.executed)"' 00:35:10.231 16:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:10.231 16:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:10.231 16:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:10.231 16:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:10.231 16:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 811962 00:35:10.231 16:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 811962 ']' 00:35:10.231 16:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 811962 00:35:10.231 16:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:35:10.231 16:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:10.231 16:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 811962 00:35:10.231 16:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:10.231 16:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:10.231 16:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 811962' 00:35:10.231 killing process with pid 811962 00:35:10.231 16:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 811962 00:35:10.231 Received shutdown signal, test time was about 2.000000 seconds 00:35:10.231 00:35:10.231 Latency(us) 00:35:10.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:10.231 =================================================================================================================== 00:35:10.231 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:10.231 16:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 811962 00:35:11.606 16:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:35:11.606 16:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:11.606 16:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:11.606 16:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:11.606 16:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:11.606 16:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:11.606 16:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:11.606 16:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=812614 00:35:11.606 16:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:11.606 16:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 812614 /var/tmp/bperf.sock 00:35:11.606 16:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 812614 ']' 00:35:11.606 16:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:11.606 16:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:11.606 16:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:11.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:11.606 16:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:11.606 16:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:11.606 [2024-07-26 16:40:31.121320] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:35:11.606 [2024-07-26 16:40:31.121478] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid812614 ] 00:35:11.606 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:11.606 Zero copy mechanism will not be used. 
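run_bperf takes its workload as positional arguments (rw, block size, queue depth, scan_dsa, per the local declarations at digest.sh@77-80 above), and nvmf_digest_clean repeats the same attach-and-measure flow four times. The calls recorded at digest.sh@128 through @131 in this log are, with the last one now starting:

    # The four digest-clean permutations exercised in this run.
    run_bperf randread  4096   128 false   # 4 KiB random reads,   qd 128
    run_bperf randread  131072  16 false   # 128 KiB random reads,  qd 16
    run_bperf randwrite 4096   128 false   # 4 KiB random writes,  qd 128
    run_bperf randwrite 131072  16 false   # 128 KiB random writes, qd 16 (current run)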
00:35:11.606 EAL: No free 2048 kB hugepages reported on node 1 00:35:11.606 [2024-07-26 16:40:31.246772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:11.864 [2024-07-26 16:40:31.481827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:12.430 16:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:12.430 16:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:35:12.430 16:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:12.430 16:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:12.430 16:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:12.997 16:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:12.997 16:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:13.587 nvme0n1 00:35:13.587 16:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:13.587 16:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:13.587 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:13.587 Zero copy mechanism will not be used. 00:35:13.587 Running I/O for 2 seconds... 
00:35:16.116 00:35:16.116 Latency(us) 00:35:16.116 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:16.116 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:16.116 nvme0n1 : 2.01 2562.65 320.33 0.00 0.00 6226.31 3179.71 9320.68 00:35:16.116 =================================================================================================================== 00:35:16.116 Total : 2562.65 320.33 0.00 0.00 6226.31 3179.71 9320.68 00:35:16.116 0 00:35:16.116 16:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:16.116 16:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:16.116 16:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:16.116 16:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:16.116 | select(.opcode=="crc32c") 00:35:16.116 | "\(.module_name) \(.executed)"' 00:35:16.116 16:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:16.116 16:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:16.116 16:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:16.116 16:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:16.116 16:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:16.116 16:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 812614 00:35:16.116 16:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 812614 ']' 00:35:16.116 16:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 812614 00:35:16.116 16:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:35:16.116 16:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:16.116 16:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 812614 00:35:16.116 16:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:16.116 16:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:16.116 16:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 812614' 00:35:16.116 killing process with pid 812614 00:35:16.116 16:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 812614 00:35:16.117 Received shutdown signal, test time was about 2.000000 seconds 00:35:16.117 00:35:16.117 Latency(us) 00:35:16.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:16.117 =================================================================================================================== 00:35:16.117 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:16.117 16:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 812614 00:35:17.052 16:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 810514 00:35:17.052 16:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 810514 ']' 00:35:17.052 16:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 810514 00:35:17.052 16:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:35:17.052 16:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:17.052 16:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 810514 00:35:17.052 16:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:17.052 16:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:17.052 16:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 810514' 00:35:17.052 killing process with pid 810514 00:35:17.052 16:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 810514 00:35:17.052 16:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 810514 00:35:18.426 00:35:18.426 real 0m24.896s 00:35:18.426 user 0m47.156s 00:35:18.426 sys 0m4.725s 00:35:18.426 16:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:18.426 16:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:18.426 ************************************ 00:35:18.426 END TEST nvmf_digest_clean 00:35:18.426 ************************************ 00:35:18.426 16:40:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:35:18.426 16:40:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:18.426 16:40:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:18.426 16:40:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:18.426 ************************************ 00:35:18.426 START TEST nvmf_digest_error 00:35:18.426 ************************************ 00:35:18.426 16:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:35:18.426 16:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:35:18.426 16:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:18.426 16:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:18.426 16:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:18.426 16:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=813444 00:35:18.426 16:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:18.426 16:40:38 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 813444 00:35:18.426 16:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 813444 ']' 00:35:18.426 16:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:18.426 16:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:18.426 16:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:18.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:18.426 16:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:18.426 16:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:18.426 [2024-07-26 16:40:38.116943] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:35:18.426 [2024-07-26 16:40:38.117100] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:18.684 EAL: No free 2048 kB hugepages reported on node 1 00:35:18.684 [2024-07-26 16:40:38.252851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:18.942 [2024-07-26 16:40:38.500142] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:18.942 [2024-07-26 16:40:38.500214] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:18.942 [2024-07-26 16:40:38.500251] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:18.942 [2024-07-26 16:40:38.500276] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:18.942 [2024-07-26 16:40:38.500297] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
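nvmf_digest_clean completes in roughly 25 seconds of wall time (real 0m24.896s above), and the digest-error stage then brings up its own target: nvmfappstart launches a second nvmf_tgt (nvmfpid 813444) inside the same cvl_0_0_ns_spdk namespace, again with --wait-for-rpc so the accel layer can be reconfigured before the transport is created; the crc32c opcode is reassigned to the error-injection module a few lines below. A condensed view of that target-side setup, assuming rpc_cmd talks to the default /var/tmp/spdk.sock socket as in the harness:

    # Target-side setup for the digest-error stage (condensed from this trace).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    # once the target is listening on its default RPC socket:
    $SPDK/scripts/rpc.py accel_assign_opc -o crc32c -m error   # route crc32c through the error module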
00:35:18.942 [2024-07-26 16:40:38.500342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:19.507 16:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:19.507 16:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:35:19.507 16:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:19.507 16:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:19.507 16:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:19.507 16:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:19.507 16:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:35:19.507 16:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.507 16:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:19.507 [2024-07-26 16:40:39.066642] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:35:19.507 16:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.507 16:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:35:19.507 16:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:35:19.507 16:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.507 16:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:19.765 null0 00:35:19.765 [2024-07-26 16:40:39.447944] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:19.765 [2024-07-26 16:40:39.472239] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:19.765 16:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.765 16:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:35:19.765 16:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:19.765 16:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:19.766 16:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:19.766 16:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:19.766 16:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=813714 00:35:19.766 16:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:35:19.766 16:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 813714 /var/tmp/bperf.sock 00:35:19.766 16:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 813714 ']' 
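On the host side, run_bperf_err starts bdevperf without --wait-for-rpc and instead prepares it for fault observation: error statistics are enabled and the bdev retry count is made unlimited before the controller is attached with --ddgst, and once nvme0n1 is up the target's accel layer is told to corrupt 256 crc32c results. Each injected corruption then appears in the trace below as a "data digest error" from nvme_tcp.c followed by a COMMAND TRANSIENT TRANSPORT ERROR completion, which the initiator retries rather than failing back to bdevperf. Reduced to the essential RPC calls seen in this run (the bperf socket carries the bdev options, the target's default socket carries the injection):

    # Host- and target-side setup for the digest-error run, as traced here.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # target side: corrupt 256 digests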
00:35:19.766 16:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:19.766 16:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:19.766 16:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:19.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:19.766 16:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:19.766 16:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:20.024 [2024-07-26 16:40:39.566765] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:35:20.024 [2024-07-26 16:40:39.566926] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid813714 ] 00:35:20.024 EAL: No free 2048 kB hugepages reported on node 1 00:35:20.024 [2024-07-26 16:40:39.701318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:20.282 [2024-07-26 16:40:39.959436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:20.849 16:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:20.849 16:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:35:20.849 16:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:20.849 16:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:21.107 16:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:21.107 16:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.107 16:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:21.107 16:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.107 16:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:21.107 16:40:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:21.674 nvme0n1 00:35:21.674 16:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:21.674 16:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.674 16:40:41 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:21.674 16:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.674 16:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:21.674 16:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:21.674 Running I/O for 2 seconds... 00:35:21.674 [2024-07-26 16:40:41.411056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:21.674 [2024-07-26 16:40:41.411171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.674 [2024-07-26 16:40:41.411200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.674 [2024-07-26 16:40:41.435488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:21.674 [2024-07-26 16:40:41.435540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.674 [2024-07-26 16:40:41.435570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.933 [2024-07-26 16:40:41.458788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:21.933 [2024-07-26 16:40:41.458839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.933 [2024-07-26 16:40:41.458870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.933 [2024-07-26 16:40:41.480750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:21.933 [2024-07-26 16:40:41.480802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.933 [2024-07-26 16:40:41.480831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.933 [2024-07-26 16:40:41.499131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:21.933 [2024-07-26 16:40:41.499177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.933 [2024-07-26 16:40:41.499204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.933 [2024-07-26 16:40:41.514871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:21.933 [2024-07-26 16:40:41.514920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.933 
[2024-07-26 16:40:41.514950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.933 [2024-07-26 16:40:41.532164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:21.933 [2024-07-26 16:40:41.532206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.933 [2024-07-26 16:40:41.532231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.933 [2024-07-26 16:40:41.551932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:21.933 [2024-07-26 16:40:41.551982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.933 [2024-07-26 16:40:41.552011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.933 [2024-07-26 16:40:41.572179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:21.933 [2024-07-26 16:40:41.572223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.933 [2024-07-26 16:40:41.572249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.933 [2024-07-26 16:40:41.588718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:21.933 [2024-07-26 16:40:41.588775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.933 [2024-07-26 16:40:41.588804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.933 [2024-07-26 16:40:41.609255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:21.933 [2024-07-26 16:40:41.609315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.933 [2024-07-26 16:40:41.609341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.933 [2024-07-26 16:40:41.631917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:21.933 [2024-07-26 16:40:41.631988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.933 [2024-07-26 16:40:41.632024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.933 [2024-07-26 16:40:41.647203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:21.933 [2024-07-26 16:40:41.647245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:40 nsid:1 lba:15842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.933 [2024-07-26 16:40:41.647270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.933 [2024-07-26 16:40:41.668803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:21.934 [2024-07-26 16:40:41.668853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.934 [2024-07-26 16:40:41.668881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.934 [2024-07-26 16:40:41.692543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:21.934 [2024-07-26 16:40:41.692595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.934 [2024-07-26 16:40:41.692626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.192 [2024-07-26 16:40:41.713537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.192 [2024-07-26 16:40:41.713588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.192 [2024-07-26 16:40:41.713617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.192 [2024-07-26 16:40:41.730658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.192 [2024-07-26 16:40:41.730706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.192 [2024-07-26 16:40:41.730736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.192 [2024-07-26 16:40:41.750216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.192 [2024-07-26 16:40:41.750258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.192 [2024-07-26 16:40:41.750282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.192 [2024-07-26 16:40:41.771199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.192 [2024-07-26 16:40:41.771245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.192 [2024-07-26 16:40:41.771272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.192 [2024-07-26 16:40:41.788285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.192 [2024-07-26 
16:40:41.788342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.192 [2024-07-26 16:40:41.788368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.192 [2024-07-26 16:40:41.806822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.192 [2024-07-26 16:40:41.806871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.192 [2024-07-26 16:40:41.806901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.192 [2024-07-26 16:40:41.823118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.192 [2024-07-26 16:40:41.823158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.192 [2024-07-26 16:40:41.823182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.192 [2024-07-26 16:40:41.842894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.192 [2024-07-26 16:40:41.842944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.192 [2024-07-26 16:40:41.842973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.192 [2024-07-26 16:40:41.864035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.192 [2024-07-26 16:40:41.864093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.192 [2024-07-26 16:40:41.864137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.192 [2024-07-26 16:40:41.881068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.192 [2024-07-26 16:40:41.881128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.193 [2024-07-26 16:40:41.881153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.193 [2024-07-26 16:40:41.899422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.193 [2024-07-26 16:40:41.899470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.193 [2024-07-26 16:40:41.899499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.193 [2024-07-26 16:40:41.921621] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.193 [2024-07-26 16:40:41.921677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.193 [2024-07-26 16:40:41.921708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.193 [2024-07-26 16:40:41.940143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.193 [2024-07-26 16:40:41.940187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.193 [2024-07-26 16:40:41.940214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.451 [2024-07-26 16:40:41.956184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.451 [2024-07-26 16:40:41.956228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.451 [2024-07-26 16:40:41.956255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.451 [2024-07-26 16:40:41.975745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.451 [2024-07-26 16:40:41.975795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.451 [2024-07-26 16:40:41.975825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.451 [2024-07-26 16:40:41.997172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.451 [2024-07-26 16:40:41.997214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.451 [2024-07-26 16:40:41.997254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.451 [2024-07-26 16:40:42.016723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.451 [2024-07-26 16:40:42.016772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.451 [2024-07-26 16:40:42.016802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.451 [2024-07-26 16:40:42.032523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.451 [2024-07-26 16:40:42.032572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.451 [2024-07-26 16:40:42.032602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.451 [2024-07-26 16:40:42.053260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.451 [2024-07-26 16:40:42.053305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.451 [2024-07-26 16:40:42.053330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.451 [2024-07-26 16:40:42.073699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.452 [2024-07-26 16:40:42.073748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.452 [2024-07-26 16:40:42.073779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.452 [2024-07-26 16:40:42.090943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.452 [2024-07-26 16:40:42.090992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:19614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.452 [2024-07-26 16:40:42.091021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.452 [2024-07-26 16:40:42.107946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.452 [2024-07-26 16:40:42.107999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.452 [2024-07-26 16:40:42.108029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.452 [2024-07-26 16:40:42.127853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.452 [2024-07-26 16:40:42.127903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.452 [2024-07-26 16:40:42.127932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.452 [2024-07-26 16:40:42.143335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.452 [2024-07-26 16:40:42.143377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.452 [2024-07-26 16:40:42.143420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.452 [2024-07-26 16:40:42.162308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.452 [2024-07-26 16:40:42.162370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.452 [2024-07-26 16:40:42.162399] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.452 [2024-07-26 16:40:42.185426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.452 [2024-07-26 16:40:42.185477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.452 [2024-07-26 16:40:42.185506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.452 [2024-07-26 16:40:42.205585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.452 [2024-07-26 16:40:42.205636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.452 [2024-07-26 16:40:42.205665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.711 [2024-07-26 16:40:42.222698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.711 [2024-07-26 16:40:42.222749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.711 [2024-07-26 16:40:42.222779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.711 [2024-07-26 16:40:42.244947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.711 [2024-07-26 16:40:42.245005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.711 [2024-07-26 16:40:42.245035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.711 [2024-07-26 16:40:42.263497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.711 [2024-07-26 16:40:42.263547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.711 [2024-07-26 16:40:42.263575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.711 [2024-07-26 16:40:42.279220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.711 [2024-07-26 16:40:42.279260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.711 [2024-07-26 16:40:42.279284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.711 [2024-07-26 16:40:42.300455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.711 [2024-07-26 16:40:42.300504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9233 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.711 [2024-07-26 16:40:42.300535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.711 [2024-07-26 16:40:42.315467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.711 [2024-07-26 16:40:42.315517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.711 [2024-07-26 16:40:42.315546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.711 [2024-07-26 16:40:42.333192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.711 [2024-07-26 16:40:42.333236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.711 [2024-07-26 16:40:42.333261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.711 [2024-07-26 16:40:42.353139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.711 [2024-07-26 16:40:42.353183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.711 [2024-07-26 16:40:42.353209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.711 [2024-07-26 16:40:42.371146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.711 [2024-07-26 16:40:42.371192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.711 [2024-07-26 16:40:42.371230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.711 [2024-07-26 16:40:42.386667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.711 [2024-07-26 16:40:42.386709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.711 [2024-07-26 16:40:42.386735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.711 [2024-07-26 16:40:42.406493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.711 [2024-07-26 16:40:42.406553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.711 [2024-07-26 16:40:42.406579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.711 [2024-07-26 16:40:42.425794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.711 [2024-07-26 16:40:42.425840] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.711 [2024-07-26 16:40:42.425866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.711 [2024-07-26 16:40:42.440770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.711 [2024-07-26 16:40:42.440813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.711 [2024-07-26 16:40:42.440838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.711 [2024-07-26 16:40:42.462957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.711 [2024-07-26 16:40:42.463005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.711 [2024-07-26 16:40:42.463032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.970 [2024-07-26 16:40:42.480054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.970 [2024-07-26 16:40:42.480124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.970 [2024-07-26 16:40:42.480151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.970 [2024-07-26 16:40:42.493528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.970 [2024-07-26 16:40:42.493569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.970 [2024-07-26 16:40:42.493594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.970 [2024-07-26 16:40:42.512859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.970 [2024-07-26 16:40:42.512906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.970 [2024-07-26 16:40:42.512932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.970 [2024-07-26 16:40:42.534891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.970 [2024-07-26 16:40:42.534938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:25383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.970 [2024-07-26 16:40:42.534965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.970 [2024-07-26 16:40:42.550395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:35:22.970 [2024-07-26 16:40:42.550452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.970 [2024-07-26 16:40:42.550489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.970 [2024-07-26 16:40:42.567155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.970 [2024-07-26 16:40:42.567202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.970 [2024-07-26 16:40:42.567229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.970 [2024-07-26 16:40:42.582222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.970 [2024-07-26 16:40:42.582265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.970 [2024-07-26 16:40:42.582290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.971 [2024-07-26 16:40:42.601158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.971 [2024-07-26 16:40:42.601202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.971 [2024-07-26 16:40:42.601228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.971 [2024-07-26 16:40:42.620364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.971 [2024-07-26 16:40:42.620424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.971 [2024-07-26 16:40:42.620451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.971 [2024-07-26 16:40:42.634718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.971 [2024-07-26 16:40:42.634759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.971 [2024-07-26 16:40:42.634784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.971 [2024-07-26 16:40:42.651107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.971 [2024-07-26 16:40:42.651149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.971 [2024-07-26 16:40:42.651174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.971 [2024-07-26 16:40:42.669504] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.971 [2024-07-26 16:40:42.669546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.971 [2024-07-26 16:40:42.669588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.971 [2024-07-26 16:40:42.684351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.971 [2024-07-26 16:40:42.684407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:55 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.971 [2024-07-26 16:40:42.684432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.971 [2024-07-26 16:40:42.701906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.971 [2024-07-26 16:40:42.701958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.971 [2024-07-26 16:40:42.701984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.971 [2024-07-26 16:40:42.723905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:22.971 [2024-07-26 16:40:42.723951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.971 [2024-07-26 16:40:42.723976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.230 [2024-07-26 16:40:42.745988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:23.230 [2024-07-26 16:40:42.746033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:16145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.230 [2024-07-26 16:40:42.746065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.230 [2024-07-26 16:40:42.765906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:23.230 [2024-07-26 16:40:42.765968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.230 [2024-07-26 16:40:42.765993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.230 [2024-07-26 16:40:42.781127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:23.230 [2024-07-26 16:40:42.781171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.230 [2024-07-26 16:40:42.781198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.230 [2024-07-26 16:40:42.802629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:23.230 [2024-07-26 16:40:42.802674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.230 [2024-07-26 16:40:42.802699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.230 [2024-07-26 16:40:42.817494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:23.230 [2024-07-26 16:40:42.817536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.230 [2024-07-26 16:40:42.817577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.230 [2024-07-26 16:40:42.834041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:23.230 [2024-07-26 16:40:42.834103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.230 [2024-07-26 16:40:42.834130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.230 [2024-07-26 16:40:42.851738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:23.230 [2024-07-26 16:40:42.851778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.230 [2024-07-26 16:40:42.851815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.230 [2024-07-26 16:40:42.866492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:23.230 [2024-07-26 16:40:42.866533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.230 [2024-07-26 16:40:42.866558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.230 [2024-07-26 16:40:42.885422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:23.230 [2024-07-26 16:40:42.885480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:8233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.230 [2024-07-26 16:40:42.885506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.230 [2024-07-26 16:40:42.901837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:23.230 [2024-07-26 16:40:42.901880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.230 [2024-07-26 16:40:42.901906] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.230 [2024-07-26 16:40:42.916439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:23.230 [2024-07-26 16:40:42.916480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.230 [2024-07-26 16:40:42.916504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.230 [2024-07-26 16:40:42.933335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:23.230 [2024-07-26 16:40:42.933395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.230 [2024-07-26 16:40:42.933422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.230 [2024-07-26 16:40:42.949045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:23.230 [2024-07-26 16:40:42.949111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:18341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.230 [2024-07-26 16:40:42.949137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.230 [2024-07-26 16:40:42.966239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:23.230 [2024-07-26 16:40:42.966283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.230 [2024-07-26 16:40:42.966309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.230 [2024-07-26 16:40:42.982646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:23.230 [2024-07-26 16:40:42.982687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.230 [2024-07-26 16:40:42.982712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.489 [2024-07-26 16:40:42.999922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:23.489 [2024-07-26 16:40:42.999967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.489 [2024-07-26 16:40:42.999993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.489 [2024-07-26 16:40:43.016102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:23.489 [2024-07-26 16:40:43.016163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11721 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.489 [2024-07-26 16:40:43.016190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.489 [2024-07-26 16:40:43.034101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:23.489 [2024-07-26 16:40:43.034146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.489 [2024-07-26 16:40:43.034172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.489 [2024-07-26 16:40:43.050011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:23.489 [2024-07-26 16:40:43.050055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:15211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.489 [2024-07-26 16:40:43.050110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.489 [2024-07-26 16:40:43.066297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:23.489 [2024-07-26 16:40:43.066356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.489 [2024-07-26 16:40:43.066384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.489 [2024-07-26 16:40:43.083304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:23.489 [2024-07-26 16:40:43.083345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.489 [2024-07-26 16:40:43.083386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.489 [2024-07-26 16:40:43.099175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:23.489 [2024-07-26 16:40:43.099216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.489 [2024-07-26 16:40:43.099240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.489 [2024-07-26 16:40:43.114403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:23.489 [2024-07-26 16:40:43.114443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.489 [2024-07-26 16:40:43.114468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.489 [2024-07-26 16:40:43.134693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:23.489 [2024-07-26 16:40:43.134736] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.489 [2024-07-26 16:40:43.134773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.489 [2024-07-26 16:40:43.154422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:23.489 [2024-07-26 16:40:43.154496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.489 [2024-07-26 16:40:43.154538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.489 [2024-07-26 16:40:43.173670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:23.489 [2024-07-26 16:40:43.173713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.489 [2024-07-26 16:40:43.173754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.489 [2024-07-26 16:40:43.189353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:23.489 [2024-07-26 16:40:43.189411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.489 [2024-07-26 16:40:43.189436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.489 [2024-07-26 16:40:43.207596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:23.489 [2024-07-26 16:40:43.207640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.490 [2024-07-26 16:40:43.207666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.490 [2024-07-26 16:40:43.225276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:23.490 [2024-07-26 16:40:43.225321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.490 [2024-07-26 16:40:43.225348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.490 [2024-07-26 16:40:43.240036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:23.490 [2024-07-26 16:40:43.240102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.490 [2024-07-26 16:40:43.240128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.748 [2024-07-26 16:40:43.258068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x6150001f2a00) 00:35:23.748 [2024-07-26 16:40:43.258115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.748 [2024-07-26 16:40:43.258141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.748 [2024-07-26 16:40:43.275637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:23.748 [2024-07-26 16:40:43.275679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.748 [2024-07-26 16:40:43.275704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.748 [2024-07-26 16:40:43.290026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:23.748 [2024-07-26 16:40:43.290090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.748 [2024-07-26 16:40:43.290117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.748 [2024-07-26 16:40:43.310269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:23.748 [2024-07-26 16:40:43.310315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:23924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.748 [2024-07-26 16:40:43.310341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.748 [2024-07-26 16:40:43.332178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:23.748 [2024-07-26 16:40:43.332223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.748 [2024-07-26 16:40:43.332248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.748 [2024-07-26 16:40:43.346970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:23.748 [2024-07-26 16:40:43.347013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.748 [2024-07-26 16:40:43.347055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.748 [2024-07-26 16:40:43.363871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:23.748 [2024-07-26 16:40:43.363915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.748 [2024-07-26 16:40:43.363956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.748 [2024-07-26 
16:40:43.380102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:23.748 [2024-07-26 16:40:43.380150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.748 [2024-07-26 16:40:43.380177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.748 00:35:23.748 Latency(us) 00:35:23.748 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:23.748 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:23.748 nvme0n1 : 2.01 13961.51 54.54 0.00 0.00 9157.48 4320.52 33010.73 00:35:23.748 =================================================================================================================== 00:35:23.748 Total : 13961.51 54.54 0.00 0.00 9157.48 4320.52 33010.73 00:35:23.748 0 00:35:23.748 16:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:23.748 16:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:23.748 16:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:23.748 | .driver_specific 00:35:23.748 | .nvme_error 00:35:23.748 | .status_code 00:35:23.748 | .command_transient_transport_error' 00:35:23.748 16:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:24.007 16:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 109 > 0 )) 00:35:24.007 16:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 813714 00:35:24.007 16:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 813714 ']' 00:35:24.007 16:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 813714 00:35:24.007 16:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:24.007 16:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:24.007 16:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 813714 00:35:24.007 16:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:24.007 16:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:24.007 16:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 813714' 00:35:24.007 killing process with pid 813714 00:35:24.007 16:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 813714 00:35:24.007 Received shutdown signal, test time was about 2.000000 seconds 00:35:24.007 00:35:24.007 Latency(us) 00:35:24.007 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:24.007 =================================================================================================================== 00:35:24.007 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
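The pass above ends with get_transient_errcount: the test reads bdevperf's per-bdev I/O statistics over its RPC socket and extracts how many completions carried the TRANSIENT TRANSPORT ERROR status that the injected digest failures produce (109 here, so the (( ... > 0 )) assertion passes before the process is killed). A minimal plain-shell sketch of that query, assuming bdevperf is still listening on /var/tmp/bperf.sock as in the trace:

# count completions that failed with COMMAND TRANSIENT TRANSPORT ERROR on nvme0n1
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
errcount=$($rpc -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errcount > 0 )) && echo "digest errors surfaced as $errcount transient transport errors"

These counters come from the controller's NVMe error statistics, which the test enables with bdev_nvme_set_options --nvme-error-stat (visible in the setup of the next pass below).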
00:35:24.007 16:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 813714 00:35:25.382 16:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:35:25.382 16:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:25.382 16:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:25.382 16:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:25.382 16:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:25.382 16:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=814265 00:35:25.382 16:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:35:25.382 16:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 814265 /var/tmp/bperf.sock 00:35:25.382 16:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 814265 ']' 00:35:25.382 16:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:25.382 16:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:25.382 16:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:25.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:25.382 16:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:25.382 16:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:25.382 [2024-07-26 16:40:44.873146] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:35:25.382 [2024-07-26 16:40:44.873308] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid814265 ] 00:35:25.382 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:25.382 Zero copy mechanism will not be used. 
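For the second error pass (randread, 128 KiB I/O, queue depth 16) the test starts a fresh bdevperf instance in wait-for-RPC mode (-z) on its own socket and only proceeds once that socket answers. A rough shell sketch of this step using the same paths as the trace; the polling loop with rpc_get_methods stands in for the harness's waitforlisten helper and is an assumption, not the exact mechanism:

bperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# -z: start idle and wait for RPC configuration; -m 2: core mask 0x2, so the reactor runs on core 1
$bperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
bperfpid=$!
# wait until the UNIX domain socket accepts RPCs before configuring the run
until $rpc -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done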
00:35:25.382 EAL: No free 2048 kB hugepages reported on node 1 00:35:25.382 [2024-07-26 16:40:45.008911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:25.641 [2024-07-26 16:40:45.266383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:26.207 16:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:26.207 16:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:35:26.207 16:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:26.207 16:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:26.465 16:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:26.465 16:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.465 16:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:26.465 16:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.465 16:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:26.465 16:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:27.032 nvme0n1 00:35:27.032 16:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:27.032 16:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.032 16:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:27.032 16:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.032 16:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:27.032 16:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:27.032 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:27.032 Zero copy mechanism will not be used. 00:35:27.032 Running I/O for 2 seconds... 
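Once bdevperf is listening, the trace configures it and arms the fault: NVMe error counters are enabled and the bdev retry count set to -1, crc32c error injection is first disabled to start clean, the controller is attached over TCP with data digest enabled (--ddgst), and then accel crc32c injection is switched to corrupt mode with -i 32 before perform_tests starts the 2-second run. A hedged sketch of the same sequence; the two accel_error_inject_error calls go through rpc_cmd in the trace, whose target socket is not shown in this excerpt, so no -s argument is given for them here:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bperf_sock=/var/tmp/bperf.sock

$rpc -s $bperf_sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
$rpc accel_error_inject_error -o crc32c -t disable            # start with injection off
$rpc -s $bperf_sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0            # data digest enabled on the host
$rpc accel_error_inject_error -o crc32c -t corrupt -i 32      # arm crc32c corruption (-i 32, as in the trace)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s $bperf_sock perform_tests

With the digests corrupted, the affected READs complete with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is the repeating pattern in the log below and what bdev_get_iostat is queried for afterwards.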
00:35:27.032 [2024-07-26 16:40:46.652760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.032 [2024-07-26 16:40:46.652848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.032 [2024-07-26 16:40:46.652883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.032 [2024-07-26 16:40:46.664785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.032 [2024-07-26 16:40:46.664840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.032 [2024-07-26 16:40:46.664871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.032 [2024-07-26 16:40:46.676707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.032 [2024-07-26 16:40:46.676759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.032 [2024-07-26 16:40:46.676801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.032 [2024-07-26 16:40:46.688397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.032 [2024-07-26 16:40:46.688449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.032 [2024-07-26 16:40:46.688478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.032 [2024-07-26 16:40:46.700036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.032 [2024-07-26 16:40:46.700112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.032 [2024-07-26 16:40:46.700140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.032 [2024-07-26 16:40:46.711668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.032 [2024-07-26 16:40:46.711718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.032 [2024-07-26 16:40:46.711748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.032 [2024-07-26 16:40:46.723296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.032 [2024-07-26 16:40:46.723355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.032 [2024-07-26 16:40:46.723381] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.032 [2024-07-26 16:40:46.734849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.032 [2024-07-26 16:40:46.734900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.032 [2024-07-26 16:40:46.734929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.032 [2024-07-26 16:40:46.746476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.032 [2024-07-26 16:40:46.746526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.033 [2024-07-26 16:40:46.746555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.033 [2024-07-26 16:40:46.758167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.033 [2024-07-26 16:40:46.758211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.033 [2024-07-26 16:40:46.758237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.033 [2024-07-26 16:40:46.769716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.033 [2024-07-26 16:40:46.769767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.033 [2024-07-26 16:40:46.769796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.033 [2024-07-26 16:40:46.781195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.033 [2024-07-26 16:40:46.781256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.033 [2024-07-26 16:40:46.781283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.033 [2024-07-26 16:40:46.792863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.033 [2024-07-26 16:40:46.792914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.033 [2024-07-26 16:40:46.792943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.292 [2024-07-26 16:40:46.805085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.292 [2024-07-26 16:40:46.805144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:27.292 [2024-07-26 16:40:46.805170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.292 [2024-07-26 16:40:46.816828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.292 [2024-07-26 16:40:46.816879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.292 [2024-07-26 16:40:46.816909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.292 [2024-07-26 16:40:46.828454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.292 [2024-07-26 16:40:46.828503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.292 [2024-07-26 16:40:46.828532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.292 [2024-07-26 16:40:46.840029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.292 [2024-07-26 16:40:46.840086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.292 [2024-07-26 16:40:46.840130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.292 [2024-07-26 16:40:46.851629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.292 [2024-07-26 16:40:46.851679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.292 [2024-07-26 16:40:46.851708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.292 [2024-07-26 16:40:46.863155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.292 [2024-07-26 16:40:46.863198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.292 [2024-07-26 16:40:46.863225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.292 [2024-07-26 16:40:46.874703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.292 [2024-07-26 16:40:46.874752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.292 [2024-07-26 16:40:46.874790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.292 [2024-07-26 16:40:46.886246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.292 [2024-07-26 16:40:46.886291] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.292 [2024-07-26 16:40:46.886317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.292 [2024-07-26 16:40:46.897834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.292 [2024-07-26 16:40:46.897883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.292 [2024-07-26 16:40:46.897913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.292 [2024-07-26 16:40:46.909458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.292 [2024-07-26 16:40:46.909507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.292 [2024-07-26 16:40:46.909536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.292 [2024-07-26 16:40:46.921088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.292 [2024-07-26 16:40:46.921161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.292 [2024-07-26 16:40:46.921188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.292 [2024-07-26 16:40:46.932835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.292 [2024-07-26 16:40:46.932892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.292 [2024-07-26 16:40:46.932921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.292 [2024-07-26 16:40:46.944646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.292 [2024-07-26 16:40:46.944704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.292 [2024-07-26 16:40:46.944733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.292 [2024-07-26 16:40:46.956596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.292 [2024-07-26 16:40:46.956652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.292 [2024-07-26 16:40:46.956680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.292 [2024-07-26 16:40:46.968229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x6150001f2a00) 00:35:27.292 [2024-07-26 16:40:46.968282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.292 [2024-07-26 16:40:46.968308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.292 [2024-07-26 16:40:46.979780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.292 [2024-07-26 16:40:46.979836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.292 [2024-07-26 16:40:46.979866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.292 [2024-07-26 16:40:46.991291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.292 [2024-07-26 16:40:46.991357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.292 [2024-07-26 16:40:46.991387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.292 [2024-07-26 16:40:47.002831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.292 [2024-07-26 16:40:47.002890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.292 [2024-07-26 16:40:47.002919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.292 [2024-07-26 16:40:47.014575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.292 [2024-07-26 16:40:47.014629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.292 [2024-07-26 16:40:47.014658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.292 [2024-07-26 16:40:47.026035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.292 [2024-07-26 16:40:47.026106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.292 [2024-07-26 16:40:47.026133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.292 [2024-07-26 16:40:47.037599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.292 [2024-07-26 16:40:47.037648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.292 [2024-07-26 16:40:47.037681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.292 [2024-07-26 
16:40:47.049247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.292 [2024-07-26 16:40:47.049301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.292 [2024-07-26 16:40:47.049327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.551 [2024-07-26 16:40:47.061238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.551 [2024-07-26 16:40:47.061292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.551 [2024-07-26 16:40:47.061318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.552 [2024-07-26 16:40:47.073024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.552 [2024-07-26 16:40:47.073090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.552 [2024-07-26 16:40:47.073144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.552 [2024-07-26 16:40:47.084702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.552 [2024-07-26 16:40:47.084757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.552 [2024-07-26 16:40:47.084786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.552 [2024-07-26 16:40:47.096239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.552 [2024-07-26 16:40:47.096291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.552 [2024-07-26 16:40:47.096316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.552 [2024-07-26 16:40:47.107798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.552 [2024-07-26 16:40:47.107854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.552 [2024-07-26 16:40:47.107884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.552 [2024-07-26 16:40:47.119851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.552 [2024-07-26 16:40:47.119901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.552 [2024-07-26 16:40:47.119938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.552 [2024-07-26 16:40:47.131439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.552 [2024-07-26 16:40:47.131504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.552 [2024-07-26 16:40:47.131535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.552 [2024-07-26 16:40:47.143033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.552 [2024-07-26 16:40:47.143113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.552 [2024-07-26 16:40:47.143142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.552 [2024-07-26 16:40:47.154705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.552 [2024-07-26 16:40:47.154761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.552 [2024-07-26 16:40:47.154789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.552 [2024-07-26 16:40:47.166307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.552 [2024-07-26 16:40:47.166378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.552 [2024-07-26 16:40:47.166407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.552 [2024-07-26 16:40:47.177918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.552 [2024-07-26 16:40:47.177979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.552 [2024-07-26 16:40:47.178008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.552 [2024-07-26 16:40:47.189714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.552 [2024-07-26 16:40:47.189776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.552 [2024-07-26 16:40:47.189806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.552 [2024-07-26 16:40:47.201719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.552 [2024-07-26 16:40:47.201780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.552 [2024-07-26 
16:40:47.201809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.552 [2024-07-26 16:40:47.213947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.552 [2024-07-26 16:40:47.214003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.552 [2024-07-26 16:40:47.214033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.552 [2024-07-26 16:40:47.225822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.552 [2024-07-26 16:40:47.225878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.552 [2024-07-26 16:40:47.225907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.552 [2024-07-26 16:40:47.237608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.552 [2024-07-26 16:40:47.237666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.552 [2024-07-26 16:40:47.237695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.552 [2024-07-26 16:40:47.249547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.552 [2024-07-26 16:40:47.249597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.552 [2024-07-26 16:40:47.249634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.552 [2024-07-26 16:40:47.261238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.552 [2024-07-26 16:40:47.261280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.552 [2024-07-26 16:40:47.261305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.552 [2024-07-26 16:40:47.272988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.552 [2024-07-26 16:40:47.273046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.552 [2024-07-26 16:40:47.273110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.552 [2024-07-26 16:40:47.284853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.552 [2024-07-26 16:40:47.284901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.552 [2024-07-26 16:40:47.284931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.552 [2024-07-26 16:40:47.296242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.552 [2024-07-26 16:40:47.296285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.552 [2024-07-26 16:40:47.296311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.552 [2024-07-26 16:40:47.307997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.552 [2024-07-26 16:40:47.308053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.552 [2024-07-26 16:40:47.308111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.823 [2024-07-26 16:40:47.320167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.823 [2024-07-26 16:40:47.320222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.823 [2024-07-26 16:40:47.320248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.823 [2024-07-26 16:40:47.331904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.823 [2024-07-26 16:40:47.331959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.823 [2024-07-26 16:40:47.331988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.823 [2024-07-26 16:40:47.343695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.823 [2024-07-26 16:40:47.343753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.823 [2024-07-26 16:40:47.343782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.823 [2024-07-26 16:40:47.355247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.823 [2024-07-26 16:40:47.355298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.823 [2024-07-26 16:40:47.355324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.823 [2024-07-26 16:40:47.366827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.823 
[2024-07-26 16:40:47.366886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.823 [2024-07-26 16:40:47.366915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.823 [2024-07-26 16:40:47.378581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.823 [2024-07-26 16:40:47.378638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.823 [2024-07-26 16:40:47.378667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.823 [2024-07-26 16:40:47.390314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.824 [2024-07-26 16:40:47.390372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.824 [2024-07-26 16:40:47.390403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.824 [2024-07-26 16:40:47.401899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.824 [2024-07-26 16:40:47.401956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.824 [2024-07-26 16:40:47.401985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.824 [2024-07-26 16:40:47.413617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.824 [2024-07-26 16:40:47.413675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.824 [2024-07-26 16:40:47.413704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.824 [2024-07-26 16:40:47.425330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.824 [2024-07-26 16:40:47.425396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.824 [2024-07-26 16:40:47.425426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.824 [2024-07-26 16:40:47.437144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.824 [2024-07-26 16:40:47.437196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.824 [2024-07-26 16:40:47.437221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.824 [2024-07-26 16:40:47.449340] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.824 [2024-07-26 16:40:47.449407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.824 [2024-07-26 16:40:47.449433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.824 [2024-07-26 16:40:47.460839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.824 [2024-07-26 16:40:47.460904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.824 [2024-07-26 16:40:47.460934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.824 [2024-07-26 16:40:47.472708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.824 [2024-07-26 16:40:47.472767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.824 [2024-07-26 16:40:47.472805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.824 [2024-07-26 16:40:47.484796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.824 [2024-07-26 16:40:47.484855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.824 [2024-07-26 16:40:47.484884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.824 [2024-07-26 16:40:47.496458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.824 [2024-07-26 16:40:47.496514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.824 [2024-07-26 16:40:47.496542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.824 [2024-07-26 16:40:47.508018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.824 [2024-07-26 16:40:47.508083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.824 [2024-07-26 16:40:47.508127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.824 [2024-07-26 16:40:47.519677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.824 [2024-07-26 16:40:47.519734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.824 [2024-07-26 16:40:47.519763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.824 [2024-07-26 16:40:47.531208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.824 [2024-07-26 16:40:47.531261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.824 [2024-07-26 16:40:47.531286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.824 [2024-07-26 16:40:47.542765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.824 [2024-07-26 16:40:47.542821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.824 [2024-07-26 16:40:47.542851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.824 [2024-07-26 16:40:47.554498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.824 [2024-07-26 16:40:47.554554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.824 [2024-07-26 16:40:47.554584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.824 [2024-07-26 16:40:47.566130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.824 [2024-07-26 16:40:47.566183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.824 [2024-07-26 16:40:47.566209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.824 [2024-07-26 16:40:47.577779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:27.824 [2024-07-26 16:40:47.577838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.824 [2024-07-26 16:40:47.577868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.092 [2024-07-26 16:40:47.589616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.092 [2024-07-26 16:40:47.589676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.092 [2024-07-26 16:40:47.589705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:28.092 [2024-07-26 16:40:47.601142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.092 [2024-07-26 16:40:47.601195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.092 [2024-07-26 16:40:47.601219] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:28.092 [2024-07-26 16:40:47.612823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.092 [2024-07-26 16:40:47.612870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.092 [2024-07-26 16:40:47.612900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:28.092 [2024-07-26 16:40:47.624534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.092 [2024-07-26 16:40:47.624592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.092 [2024-07-26 16:40:47.624621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.092 [2024-07-26 16:40:47.636235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.092 [2024-07-26 16:40:47.636287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.092 [2024-07-26 16:40:47.636313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:28.092 [2024-07-26 16:40:47.647875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.092 [2024-07-26 16:40:47.647923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.092 [2024-07-26 16:40:47.647956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:28.092 [2024-07-26 16:40:47.659489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.092 [2024-07-26 16:40:47.659537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.092 [2024-07-26 16:40:47.659574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:28.092 [2024-07-26 16:40:47.671149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.092 [2024-07-26 16:40:47.671192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.092 [2024-07-26 16:40:47.671224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.092 [2024-07-26 16:40:47.682903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.092 [2024-07-26 16:40:47.682961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.092 [2024-07-26 16:40:47.682990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:28.092 [2024-07-26 16:40:47.694572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.092 [2024-07-26 16:40:47.694632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.093 [2024-07-26 16:40:47.694662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:28.093 [2024-07-26 16:40:47.706214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.093 [2024-07-26 16:40:47.706268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.093 [2024-07-26 16:40:47.706293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:28.093 [2024-07-26 16:40:47.718035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.093 [2024-07-26 16:40:47.718103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.093 [2024-07-26 16:40:47.718150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.093 [2024-07-26 16:40:47.729831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.093 [2024-07-26 16:40:47.729889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.093 [2024-07-26 16:40:47.729918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:28.093 [2024-07-26 16:40:47.741596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.093 [2024-07-26 16:40:47.741650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.093 [2024-07-26 16:40:47.741679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:28.093 [2024-07-26 16:40:47.753089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.093 [2024-07-26 16:40:47.753159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.093 [2024-07-26 16:40:47.753185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:28.093 [2024-07-26 16:40:47.764840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.093 [2024-07-26 
16:40:47.764897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.093 [2024-07-26 16:40:47.764926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.093 [2024-07-26 16:40:47.776592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.093 [2024-07-26 16:40:47.776651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.093 [2024-07-26 16:40:47.776697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:28.093 [2024-07-26 16:40:47.788357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.093 [2024-07-26 16:40:47.788399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.093 [2024-07-26 16:40:47.788450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:28.093 [2024-07-26 16:40:47.800082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.093 [2024-07-26 16:40:47.800153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.093 [2024-07-26 16:40:47.800178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:28.093 [2024-07-26 16:40:47.811845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.093 [2024-07-26 16:40:47.811902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.093 [2024-07-26 16:40:47.811931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.093 [2024-07-26 16:40:47.823681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.093 [2024-07-26 16:40:47.823738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.093 [2024-07-26 16:40:47.823767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:28.093 [2024-07-26 16:40:47.835530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.093 [2024-07-26 16:40:47.835578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.093 [2024-07-26 16:40:47.835607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:28.093 [2024-07-26 16:40:47.847036] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.093 [2024-07-26 16:40:47.847106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.093 [2024-07-26 16:40:47.847148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:28.352 [2024-07-26 16:40:47.859099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.352 [2024-07-26 16:40:47.859170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.352 [2024-07-26 16:40:47.859196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.352 [2024-07-26 16:40:47.870676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.352 [2024-07-26 16:40:47.870733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.352 [2024-07-26 16:40:47.870771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:28.352 [2024-07-26 16:40:47.882339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.352 [2024-07-26 16:40:47.882408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.352 [2024-07-26 16:40:47.882437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:28.352 [2024-07-26 16:40:47.894203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.352 [2024-07-26 16:40:47.894246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.352 [2024-07-26 16:40:47.894271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:28.352 [2024-07-26 16:40:47.905712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.352 [2024-07-26 16:40:47.905766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.352 [2024-07-26 16:40:47.905795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.352 [2024-07-26 16:40:47.917518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.352 [2024-07-26 16:40:47.917575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.352 [2024-07-26 16:40:47.917604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:28.352 [2024-07-26 16:40:47.929190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.352 [2024-07-26 16:40:47.929243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.352 [2024-07-26 16:40:47.929268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:28.352 [2024-07-26 16:40:47.940882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.352 [2024-07-26 16:40:47.940937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.352 [2024-07-26 16:40:47.940967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:28.352 [2024-07-26 16:40:47.952378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.352 [2024-07-26 16:40:47.952428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.352 [2024-07-26 16:40:47.952463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.352 [2024-07-26 16:40:47.964018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.352 [2024-07-26 16:40:47.964075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.352 [2024-07-26 16:40:47.964121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:28.352 [2024-07-26 16:40:47.975619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.352 [2024-07-26 16:40:47.975667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.352 [2024-07-26 16:40:47.975702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:28.352 [2024-07-26 16:40:47.987047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.352 [2024-07-26 16:40:47.987126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.352 [2024-07-26 16:40:47.987154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:28.352 [2024-07-26 16:40:47.998684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.352 [2024-07-26 16:40:47.998739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.352 [2024-07-26 16:40:47.998768] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.352 [2024-07-26 16:40:48.010257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.352 [2024-07-26 16:40:48.010298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.352 [2024-07-26 16:40:48.010323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:28.352 [2024-07-26 16:40:48.021846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.352 [2024-07-26 16:40:48.021915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.352 [2024-07-26 16:40:48.021943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:28.352 [2024-07-26 16:40:48.033383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.352 [2024-07-26 16:40:48.033448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.352 [2024-07-26 16:40:48.033477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:28.352 [2024-07-26 16:40:48.045122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.352 [2024-07-26 16:40:48.045164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.352 [2024-07-26 16:40:48.045189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.352 [2024-07-26 16:40:48.056737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.352 [2024-07-26 16:40:48.056788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.352 [2024-07-26 16:40:48.056818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:28.352 [2024-07-26 16:40:48.068470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.352 [2024-07-26 16:40:48.068519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.352 [2024-07-26 16:40:48.068558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:28.352 [2024-07-26 16:40:48.080116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.352 [2024-07-26 16:40:48.080175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.352 [2024-07-26 16:40:48.080200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:28.353 [2024-07-26 16:40:48.091823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.353 [2024-07-26 16:40:48.091875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.353 [2024-07-26 16:40:48.091904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.353 [2024-07-26 16:40:48.103565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.353 [2024-07-26 16:40:48.103614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.353 [2024-07-26 16:40:48.103644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:28.611 [2024-07-26 16:40:48.115502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.611 [2024-07-26 16:40:48.115553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.611 [2024-07-26 16:40:48.115582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:28.611 [2024-07-26 16:40:48.127215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.611 [2024-07-26 16:40:48.127261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.611 [2024-07-26 16:40:48.127287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:28.611 [2024-07-26 16:40:48.139002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.611 [2024-07-26 16:40:48.139051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.611 [2024-07-26 16:40:48.139106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.611 [2024-07-26 16:40:48.150585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.611 [2024-07-26 16:40:48.150633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.611 [2024-07-26 16:40:48.150663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:28.611 [2024-07-26 16:40:48.162068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.611 [2024-07-26 
16:40:48.162128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.611 [2024-07-26 16:40:48.162154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:28.611 [2024-07-26 16:40:48.173696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.611 [2024-07-26 16:40:48.173745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.611 [2024-07-26 16:40:48.173774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:28.611 [2024-07-26 16:40:48.185295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.611 [2024-07-26 16:40:48.185337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.612 [2024-07-26 16:40:48.185378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.612 [2024-07-26 16:40:48.196928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.612 [2024-07-26 16:40:48.196977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.612 [2024-07-26 16:40:48.197005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:28.612 [2024-07-26 16:40:48.208461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.612 [2024-07-26 16:40:48.208512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.612 [2024-07-26 16:40:48.208542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:28.612 [2024-07-26 16:40:48.220015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.612 [2024-07-26 16:40:48.220073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.612 [2024-07-26 16:40:48.220119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:28.612 [2024-07-26 16:40:48.231700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.612 [2024-07-26 16:40:48.231750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.612 [2024-07-26 16:40:48.231778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.612 [2024-07-26 16:40:48.243430] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.612 [2024-07-26 16:40:48.243478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.612 [2024-07-26 16:40:48.243508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:28.612 [2024-07-26 16:40:48.255000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.612 [2024-07-26 16:40:48.255047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.612 [2024-07-26 16:40:48.255085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:28.612 [2024-07-26 16:40:48.266595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.612 [2024-07-26 16:40:48.266645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.612 [2024-07-26 16:40:48.266683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:28.612 [2024-07-26 16:40:48.278244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.612 [2024-07-26 16:40:48.278304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.612 [2024-07-26 16:40:48.278330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.612 [2024-07-26 16:40:48.289805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.612 [2024-07-26 16:40:48.289855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.612 [2024-07-26 16:40:48.289884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:28.612 [2024-07-26 16:40:48.301380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.612 [2024-07-26 16:40:48.301423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.612 [2024-07-26 16:40:48.301469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:28.612 [2024-07-26 16:40:48.313079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.612 [2024-07-26 16:40:48.313137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.612 [2024-07-26 16:40:48.313162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:28.612 [2024-07-26 16:40:48.324753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.612 [2024-07-26 16:40:48.324802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.612 [2024-07-26 16:40:48.324831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.612 [2024-07-26 16:40:48.336684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.612 [2024-07-26 16:40:48.336732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.612 [2024-07-26 16:40:48.336761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:28.612 [2024-07-26 16:40:48.348228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.612 [2024-07-26 16:40:48.348271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.612 [2024-07-26 16:40:48.348298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:28.612 [2024-07-26 16:40:48.359777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.612 [2024-07-26 16:40:48.359827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.612 [2024-07-26 16:40:48.359857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:28.612 [2024-07-26 16:40:48.371600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.612 [2024-07-26 16:40:48.371650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.612 [2024-07-26 16:40:48.371680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.871 [2024-07-26 16:40:48.383411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.871 [2024-07-26 16:40:48.383459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.871 [2024-07-26 16:40:48.383488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:28.871 [2024-07-26 16:40:48.395012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.871 [2024-07-26 16:40:48.395070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.871 [2024-07-26 
16:40:48.395116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:28.871 [2024-07-26 16:40:48.406591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.871 [2024-07-26 16:40:48.406639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.871 [2024-07-26 16:40:48.406667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:28.871 [2024-07-26 16:40:48.418047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.871 [2024-07-26 16:40:48.418107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.871 [2024-07-26 16:40:48.418151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.871 [2024-07-26 16:40:48.429695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.871 [2024-07-26 16:40:48.429744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.871 [2024-07-26 16:40:48.429774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:28.871 [2024-07-26 16:40:48.441458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.871 [2024-07-26 16:40:48.441507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.871 [2024-07-26 16:40:48.441538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:28.871 [2024-07-26 16:40:48.453320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.871 [2024-07-26 16:40:48.453379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.871 [2024-07-26 16:40:48.453410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:28.871 [2024-07-26 16:40:48.465087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.871 [2024-07-26 16:40:48.465151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.871 [2024-07-26 16:40:48.465186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.871 [2024-07-26 16:40:48.476842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.871 [2024-07-26 16:40:48.476892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.871 [2024-07-26 16:40:48.476921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:28.871 [2024-07-26 16:40:48.488569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.871 [2024-07-26 16:40:48.488619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.871 [2024-07-26 16:40:48.488649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:28.872 [2024-07-26 16:40:48.500358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.872 [2024-07-26 16:40:48.500416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.872 [2024-07-26 16:40:48.500446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:28.872 [2024-07-26 16:40:48.511859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.872 [2024-07-26 16:40:48.511908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.872 [2024-07-26 16:40:48.511936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.872 [2024-07-26 16:40:48.523533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.872 [2024-07-26 16:40:48.523582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.872 [2024-07-26 16:40:48.523612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:28.872 [2024-07-26 16:40:48.535022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.872 [2024-07-26 16:40:48.535078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.872 [2024-07-26 16:40:48.535122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:28.872 [2024-07-26 16:40:48.546576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.872 [2024-07-26 16:40:48.546625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.872 [2024-07-26 16:40:48.546654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:28.872 [2024-07-26 16:40:48.558150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.872 [2024-07-26 
16:40:48.558194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.872 [2024-07-26 16:40:48.558219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.872 [2024-07-26 16:40:48.569818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.872 [2024-07-26 16:40:48.569867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.872 [2024-07-26 16:40:48.569897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:28.872 [2024-07-26 16:40:48.581528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.872 [2024-07-26 16:40:48.581578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.872 [2024-07-26 16:40:48.581608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:28.872 [2024-07-26 16:40:48.593104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.872 [2024-07-26 16:40:48.593164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.872 [2024-07-26 16:40:48.593190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:28.872 [2024-07-26 16:40:48.604740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.872 [2024-07-26 16:40:48.604790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.872 [2024-07-26 16:40:48.604819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.872 [2024-07-26 16:40:48.616355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.872 [2024-07-26 16:40:48.616398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.872 [2024-07-26 16:40:48.616442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:28.872 [2024-07-26 16:40:48.627910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:28.872 [2024-07-26 16:40:48.627960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.872 [2024-07-26 16:40:48.627989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:29.130 [2024-07-26 16:40:48.639902] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:29.130 [2024-07-26 16:40:48.639953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.130 [2024-07-26 16:40:48.639982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:29.130 00:35:29.130 Latency(us) 00:35:29.130 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:29.130 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:29.130 nvme0n1 : 2.00 2655.94 331.99 0.00 0.00 6017.21 5461.33 12379.02 00:35:29.130 =================================================================================================================== 00:35:29.130 Total : 2655.94 331.99 0.00 0.00 6017.21 5461.33 12379.02 00:35:29.130 0 00:35:29.130 16:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:29.130 16:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:29.130 16:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:29.130 16:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:29.130 | .driver_specific 00:35:29.130 | .nvme_error 00:35:29.130 | .status_code 00:35:29.130 | .command_transient_transport_error' 00:35:29.389 16:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 171 > 0 )) 00:35:29.389 16:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 814265 00:35:29.389 16:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 814265 ']' 00:35:29.389 16:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 814265 00:35:29.389 16:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:29.389 16:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:29.389 16:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 814265 00:35:29.389 16:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:29.389 16:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:29.389 16:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 814265' 00:35:29.389 killing process with pid 814265 00:35:29.389 16:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 814265 00:35:29.389 Received shutdown signal, test time was about 2.000000 seconds 00:35:29.389 00:35:29.389 Latency(us) 00:35:29.389 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:29.389 =================================================================================================================== 00:35:29.389 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:29.389 
16:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 814265 00:35:30.351 16:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:35:30.351 16:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:30.351 16:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:30.351 16:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:30.351 16:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:30.351 16:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=814925 00:35:30.351 16:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 814925 /var/tmp/bperf.sock 00:35:30.351 16:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:35:30.351 16:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 814925 ']' 00:35:30.351 16:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:30.351 16:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:30.351 16:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:30.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:30.351 16:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:30.351 16:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:30.351 [2024-07-26 16:40:50.096114] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
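The xtrace above shows how the harness decides pass/fail for the randread digest run: it queries bdevperf's I/O statistics over the bperf RPC socket, pulls the transient-transport-error counter out of the JSON with jq, and then tears the bdevperf process down before starting the next case. A minimal sketch of that check follows; the socket path, bdev name, and jq filter are copied from the trace, while the surrounding variable names are illustrative only, not the script's own.

    # Sketch only: reproduces the get_transient_errcount check seen in the trace above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0]
          | .driver_specific
          | .nvme_error
          | .status_code
          | .command_transient_transport_error')

    # The run above reported 171 such errors; any non-zero count passes the check.
    (( errcount > 0 ))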
00:35:30.351 [2024-07-26 16:40:50.096262] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid814925 ] 00:35:30.609 EAL: No free 2048 kB hugepages reported on node 1 00:35:30.609 [2024-07-26 16:40:50.225519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:30.867 [2024-07-26 16:40:50.485171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:31.431 16:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:31.431 16:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:35:31.431 16:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:31.431 16:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:31.688 16:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:31.688 16:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.688 16:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:31.688 16:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.688 16:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:31.688 16:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:31.946 nvme0n1 00:35:31.946 16:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:31.946 16:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.946 16:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:31.946 16:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.946 16:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:31.946 16:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:32.204 Running I/O for 2 seconds... 
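Before "Running I/O for 2 seconds..." the trace walks through the control-plane setup for the randwrite case: NVMe error statistics and unlimited retries are enabled on the bdevperf side, CRC32C error injection is disabled while the controller is attached with the TCP data digest enabled, injection is then switched to corrupt every 256th CRC32C operation, and finally perform_tests is issued through bdevperf's RPC helper. A hedged sketch of that sequence is below; the commands are copied from the trace, but the socket used by rpc_cmd for the injection calls is not shown in the log, so the tool's default is assumed here.

    # Sketch of the traced setup; bperf_sock is the bdevperf RPC socket from the trace.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bperf_sock=/var/tmp/bperf.sock

    # Keep per-status-code NVMe error counters and retry failed I/O indefinitely.
    "$rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Disable CRC32C corruption so the attach itself succeeds (default RPC socket assumed).
    "$rpc" accel_error_inject_error -o crc32c -t disable

    # Attach the target with TCP data digest enabled; the RPC prints the new bdev name (nvme0n1).
    "$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt every 256th CRC32C computation so I/O starts failing the data digest check.
    "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 256

    # Start the timed workload; the data digest errors that follow stem from this point.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s "$bperf_sock" perform_tests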
00:35:32.204 [2024-07-26 16:40:51.759341] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ee5c8 00:35:32.204 [2024-07-26 16:40:51.760805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.204 [2024-07-26 16:40:51.760887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:32.204 [2024-07-26 16:40:51.774995] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fac10 00:35:32.204 [2024-07-26 16:40:51.776398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.204 [2024-07-26 16:40:51.776465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:32.204 [2024-07-26 16:40:51.792854] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe720 00:35:32.204 [2024-07-26 16:40:51.794384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.204 [2024-07-26 16:40:51.794451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:32.204 [2024-07-26 16:40:51.809251] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fcdd0 00:35:32.204 [2024-07-26 16:40:51.810952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.204 [2024-07-26 16:40:51.810992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:32.204 [2024-07-26 16:40:51.824741] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4de8 00:35:32.204 [2024-07-26 16:40:51.826345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.204 [2024-07-26 16:40:51.826408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:32.204 [2024-07-26 16:40:51.842830] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:35:32.204 [2024-07-26 16:40:51.844700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.204 [2024-07-26 16:40:51.844769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:32.205 [2024-07-26 16:40:51.859822] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eaef0 00:35:32.205 [2024-07-26 16:40:51.861915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.205 [2024-07-26 16:40:51.861964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:32.205 [2024-07-26 16:40:51.875358] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0630 00:35:32.205 [2024-07-26 16:40:51.877382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.205 [2024-07-26 16:40:51.877421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:32.205 [2024-07-26 16:40:51.890234] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de8a8 00:35:32.205 [2024-07-26 16:40:51.891595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.205 [2024-07-26 16:40:51.891650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:32.205 [2024-07-26 16:40:51.906321] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ecc78 00:35:32.205 [2024-07-26 16:40:51.907743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:8839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.205 [2024-07-26 16:40:51.907788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:32.205 [2024-07-26 16:40:51.924397] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f5be8 00:35:32.205 [2024-07-26 16:40:51.926804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.205 [2024-07-26 16:40:51.926865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:32.205 [2024-07-26 16:40:51.938910] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e8d30 00:35:32.205 [2024-07-26 16:40:51.940650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.205 [2024-07-26 16:40:51.940705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:32.205 [2024-07-26 16:40:51.953135] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ed920 00:35:32.205 [2024-07-26 16:40:51.955895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.205 [2024-07-26 16:40:51.955940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:32.463 [2024-07-26 16:40:51.967819] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6458 00:35:32.463 [2024-07-26 16:40:51.969014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.463 [2024-07-26 16:40:51.969072] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:32.463 [2024-07-26 16:40:51.984128] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ef270 00:35:32.463 [2024-07-26 16:40:51.985412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.463 [2024-07-26 16:40:51.985467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:32.463 [2024-07-26 16:40:51.998994] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fd208 00:35:32.463 [2024-07-26 16:40:52.000319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.463 [2024-07-26 16:40:52.000384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:32.463 [2024-07-26 16:40:52.016477] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e8088 00:35:32.463 [2024-07-26 16:40:52.018032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.463 [2024-07-26 16:40:52.018096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:32.463 [2024-07-26 16:40:52.032636] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f2d80 00:35:32.463 [2024-07-26 16:40:52.034385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:8535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.463 [2024-07-26 16:40:52.034440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:32.463 [2024-07-26 16:40:52.047509] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ec840 00:35:32.463 [2024-07-26 16:40:52.049266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.463 [2024-07-26 16:40:52.049320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:32.463 [2024-07-26 16:40:52.062012] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1b48 00:35:32.463 [2024-07-26 16:40:52.063178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.463 [2024-07-26 16:40:52.063229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:32.463 [2024-07-26 16:40:52.078150] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6020 00:35:32.463 [2024-07-26 16:40:52.079192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20249 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:35:32.463 [2024-07-26 16:40:52.079242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:32.463 [2024-07-26 16:40:52.096135] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fc998 00:35:32.463 [2024-07-26 16:40:52.098288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.463 [2024-07-26 16:40:52.098352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:32.463 [2024-07-26 16:40:52.110731] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ed920 00:35:32.463 [2024-07-26 16:40:52.112287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.463 [2024-07-26 16:40:52.112326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:32.463 [2024-07-26 16:40:52.126763] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0ea0 00:35:32.463 [2024-07-26 16:40:52.128448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:17454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.463 [2024-07-26 16:40:52.128493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:32.463 [2024-07-26 16:40:52.142638] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:35:32.463 [2024-07-26 16:40:52.144419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.463 [2024-07-26 16:40:52.144463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:32.463 [2024-07-26 16:40:52.158557] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe720 00:35:32.463 [2024-07-26 16:40:52.160361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.464 [2024-07-26 16:40:52.160401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:32.464 [2024-07-26 16:40:52.174297] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e95a0 00:35:32.464 [2024-07-26 16:40:52.176048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.464 [2024-07-26 16:40:52.176096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:32.464 [2024-07-26 16:40:52.189964] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fa7d8 00:35:32.464 [2024-07-26 16:40:52.191738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:76 nsid:1 lba:23088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.464 [2024-07-26 16:40:52.191785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:32.464 [2024-07-26 16:40:52.204491] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fcdd0 00:35:32.464 [2024-07-26 16:40:52.206265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:14529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.464 [2024-07-26 16:40:52.206320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:32.464 [2024-07-26 16:40:52.219201] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fb8b8 00:35:32.464 [2024-07-26 16:40:52.220311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.464 [2024-07-26 16:40:52.220379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:32.722 [2024-07-26 16:40:52.235462] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e01f8 00:35:32.722 [2024-07-26 16:40:52.236568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.722 [2024-07-26 16:40:52.236617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:32.722 [2024-07-26 16:40:52.253534] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3498 00:35:32.722 [2024-07-26 16:40:52.255693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.722 [2024-07-26 16:40:52.255749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:32.722 [2024-07-26 16:40:52.268163] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195efae0 00:35:32.722 [2024-07-26 16:40:52.269620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.722 [2024-07-26 16:40:52.269660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:32.722 [2024-07-26 16:40:52.284293] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ee190 00:35:32.722 [2024-07-26 16:40:52.285919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.722 [2024-07-26 16:40:52.285974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:32.722 [2024-07-26 16:40:52.302482] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6cc8 00:35:32.722 [2024-07-26 16:40:52.305040] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.722 [2024-07-26 16:40:52.305088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:32.722 [2024-07-26 16:40:52.313781] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f5378 00:35:32.722 [2024-07-26 16:40:52.314919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.722 [2024-07-26 16:40:52.314964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:32.722 [2024-07-26 16:40:52.329737] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f8e88 00:35:32.722 [2024-07-26 16:40:52.330874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.722 [2024-07-26 16:40:52.330922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:32.722 [2024-07-26 16:40:52.345457] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f9f68 00:35:32.722 [2024-07-26 16:40:52.346567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.722 [2024-07-26 16:40:52.346626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:32.722 [2024-07-26 16:40:52.361803] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6738 00:35:32.722 [2024-07-26 16:40:52.363177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.722 [2024-07-26 16:40:52.363220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:32.722 [2024-07-26 16:40:52.377977] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:35:32.722 [2024-07-26 16:40:52.379330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.722 [2024-07-26 16:40:52.379379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:32.722 [2024-07-26 16:40:52.394309] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f9b30 00:35:32.722 [2024-07-26 16:40:52.395842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:23154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.722 [2024-07-26 16:40:52.395899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:32.722 [2024-07-26 16:40:52.409167] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with 
pdu=0x2000195f6458 00:35:32.722 [2024-07-26 16:40:52.410667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.722 [2024-07-26 16:40:52.410705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:32.722 [2024-07-26 16:40:52.426587] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ed0b0 00:35:32.722 [2024-07-26 16:40:52.428366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.722 [2024-07-26 16:40:52.428423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:32.722 [2024-07-26 16:40:52.442773] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ec408 00:35:32.722 [2024-07-26 16:40:52.444703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.722 [2024-07-26 16:40:52.444751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:32.722 [2024-07-26 16:40:52.457690] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fdeb0 00:35:32.722 [2024-07-26 16:40:52.459654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.722 [2024-07-26 16:40:52.459709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:32.722 [2024-07-26 16:40:52.472282] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6020 00:35:32.722 [2024-07-26 16:40:52.473621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.722 [2024-07-26 16:40:52.473661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:32.980 [2024-07-26 16:40:52.488316] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fac10 00:35:32.980 [2024-07-26 16:40:52.489629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.980 [2024-07-26 16:40:52.489681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:32.980 [2024-07-26 16:40:52.506251] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e38d0 00:35:32.980 [2024-07-26 16:40:52.508566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.980 [2024-07-26 16:40:52.508620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:32.980 [2024-07-26 16:40:52.520878] tcp.c:2113:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fef90 00:35:32.980 [2024-07-26 16:40:52.522655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.980 [2024-07-26 16:40:52.522715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:32.980 [2024-07-26 16:40:52.535548] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6300 00:35:32.980 [2024-07-26 16:40:52.537551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.980 [2024-07-26 16:40:52.537606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:32.980 [2024-07-26 16:40:52.550797] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1b48 00:35:32.980 [2024-07-26 16:40:52.551987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.980 [2024-07-26 16:40:52.552030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:32.980 [2024-07-26 16:40:52.567037] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe720 00:35:32.980 [2024-07-26 16:40:52.568247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.980 [2024-07-26 16:40:52.568287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:32.980 [2024-07-26 16:40:52.583009] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de470 00:35:32.980 [2024-07-26 16:40:52.584179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:14143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.980 [2024-07-26 16:40:52.584223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:32.980 [2024-07-26 16:40:52.598873] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eaef0 00:35:32.980 [2024-07-26 16:40:52.600065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.980 [2024-07-26 16:40:52.600120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:32.980 [2024-07-26 16:40:52.616701] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e88f8 00:35:32.980 [2024-07-26 16:40:52.618618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.980 [2024-07-26 16:40:52.618673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:32.980 
[2024-07-26 16:40:52.631267] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fef90 00:35:32.980 [2024-07-26 16:40:52.632577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.980 [2024-07-26 16:40:52.632633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:32.980 [2024-07-26 16:40:52.647322] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fa3a0 00:35:32.980 [2024-07-26 16:40:52.648624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.980 [2024-07-26 16:40:52.648669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:32.980 [2024-07-26 16:40:52.665291] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e95a0 00:35:32.980 [2024-07-26 16:40:52.667622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.980 [2024-07-26 16:40:52.667677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:32.980 [2024-07-26 16:40:52.679856] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7970 00:35:32.980 [2024-07-26 16:40:52.681578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.980 [2024-07-26 16:40:52.681633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:32.980 [2024-07-26 16:40:52.697475] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195dfdc0 00:35:32.980 [2024-07-26 16:40:52.700009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.980 [2024-07-26 16:40:52.700074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:32.980 [2024-07-26 16:40:52.708605] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6cc8 00:35:32.980 [2024-07-26 16:40:52.709708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.981 [2024-07-26 16:40:52.709762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:32.981 [2024-07-26 16:40:52.723447] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6300 00:35:32.981 [2024-07-26 16:40:52.724552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.981 [2024-07-26 16:40:52.724590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:124 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:32.981 [2024-07-26 16:40:52.740857] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f2510 00:35:33.239 [2024-07-26 16:40:52.742225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.239 [2024-07-26 16:40:52.742265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:33.239 [2024-07-26 16:40:52.756600] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0350 00:35:33.239 [2024-07-26 16:40:52.757951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.239 [2024-07-26 16:40:52.757990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:33.239 [2024-07-26 16:40:52.774258] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f1ca0 00:35:33.239 [2024-07-26 16:40:52.776409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.239 [2024-07-26 16:40:52.776465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:33.239 [2024-07-26 16:40:52.789116] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3060 00:35:33.239 [2024-07-26 16:40:52.790600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:3141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.239 [2024-07-26 16:40:52.790639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:33.239 [2024-07-26 16:40:52.805172] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ecc78 00:35:33.239 [2024-07-26 16:40:52.806872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.239 [2024-07-26 16:40:52.806918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:33.239 [2024-07-26 16:40:52.823238] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4298 00:35:33.239 [2024-07-26 16:40:52.825791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.239 [2024-07-26 16:40:52.825846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:33.239 [2024-07-26 16:40:52.834374] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f96f8 00:35:33.239 [2024-07-26 16:40:52.835443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:24143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.239 [2024-07-26 16:40:52.835481] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:33.239 [2024-07-26 16:40:52.850278] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e7c50 00:35:33.239 [2024-07-26 16:40:52.851369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.239 [2024-07-26 16:40:52.851412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:33.239 [2024-07-26 16:40:52.865912] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e2c28 00:35:33.239 [2024-07-26 16:40:52.867016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:3710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.239 [2024-07-26 16:40:52.867068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:33.239 [2024-07-26 16:40:52.882001] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6300 00:35:33.239 [2024-07-26 16:40:52.883195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.239 [2024-07-26 16:40:52.883236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:33.239 [2024-07-26 16:40:52.900169] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4298 00:35:33.239 [2024-07-26 16:40:52.902278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.239 [2024-07-26 16:40:52.902334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:33.239 [2024-07-26 16:40:52.914811] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ef270 00:35:33.239 [2024-07-26 16:40:52.916319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.239 [2024-07-26 16:40:52.916375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:33.239 [2024-07-26 16:40:52.931200] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fa3a0 00:35:33.239 [2024-07-26 16:40:52.932762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.239 [2024-07-26 16:40:52.932814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:33.239 [2024-07-26 16:40:52.949278] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fbcf0 00:35:33.239 [2024-07-26 16:40:52.951811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:33.239 [2024-07-26 16:40:52.951849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:33.239 [2024-07-26 16:40:52.960512] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4de8 00:35:33.239 [2024-07-26 16:40:52.961583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.239 [2024-07-26 16:40:52.961639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:33.239 [2024-07-26 16:40:52.975416] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6b70 00:35:33.239 [2024-07-26 16:40:52.976464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.239 [2024-07-26 16:40:52.976503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:33.239 [2024-07-26 16:40:52.992865] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ef270 00:35:33.239 [2024-07-26 16:40:52.994178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.239 [2024-07-26 16:40:52.994223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:33.497 [2024-07-26 16:40:53.009458] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e88f8 00:35:33.497 [2024-07-26 16:40:53.010968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.497 [2024-07-26 16:40:53.011023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:33.497 [2024-07-26 16:40:53.024344] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f1ca0 00:35:33.497 [2024-07-26 16:40:53.025850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.497 [2024-07-26 16:40:53.025894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:33.497 [2024-07-26 16:40:53.042056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:35:33.497 [2024-07-26 16:40:53.043820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:25543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.497 [2024-07-26 16:40:53.043882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:33.497 [2024-07-26 16:40:53.058539] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e7818 00:35:33.497 [2024-07-26 16:40:53.060476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 
nsid:1 lba:23040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.497 [2024-07-26 16:40:53.060533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:33.497 [2024-07-26 16:40:53.073650] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f1430 00:35:33.497 [2024-07-26 16:40:53.075567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.497 [2024-07-26 16:40:53.075625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:33.497 [2024-07-26 16:40:53.088297] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195dfdc0 00:35:33.497 [2024-07-26 16:40:53.089569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.497 [2024-07-26 16:40:53.089608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:33.497 [2024-07-26 16:40:53.104337] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f1868 00:35:33.497 [2024-07-26 16:40:53.105561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.497 [2024-07-26 16:40:53.105624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:33.497 [2024-07-26 16:40:53.122367] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f8e88 00:35:33.497 [2024-07-26 16:40:53.124671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:23803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.497 [2024-07-26 16:40:53.124709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:33.497 [2024-07-26 16:40:53.136808] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de038 00:35:33.497 [2024-07-26 16:40:53.138517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.497 [2024-07-26 16:40:53.138579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:33.497 [2024-07-26 16:40:53.151161] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eea00 00:35:33.497 [2024-07-26 16:40:53.153919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.498 [2024-07-26 16:40:53.153964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:33.498 [2024-07-26 16:40:53.166041] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6b70 00:35:33.498 [2024-07-26 16:40:53.167134] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:15911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.498 [2024-07-26 16:40:53.167177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:33.498 [2024-07-26 16:40:53.182394] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fb048 00:35:33.498 [2024-07-26 16:40:53.183716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.498 [2024-07-26 16:40:53.183755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:33.498 [2024-07-26 16:40:53.197233] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e8d30 00:35:33.498 [2024-07-26 16:40:53.198484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.498 [2024-07-26 16:40:53.198524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:33.498 [2024-07-26 16:40:53.214625] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e7c50 00:35:33.498 [2024-07-26 16:40:53.216145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.498 [2024-07-26 16:40:53.216188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:33.498 [2024-07-26 16:40:53.230375] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0788 00:35:33.498 [2024-07-26 16:40:53.231900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.498 [2024-07-26 16:40:53.231940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:33.498 [2024-07-26 16:40:53.245221] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f57b0 00:35:33.498 [2024-07-26 16:40:53.246720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.498 [2024-07-26 16:40:53.246778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:33.755 [2024-07-26 16:40:53.263110] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:35:33.755 [2024-07-26 16:40:53.264919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.755 [2024-07-26 16:40:53.264976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:33.755 [2024-07-26 16:40:53.280857] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f9b30 
00:35:33.755 [2024-07-26 16:40:53.283298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:34 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.755 [2024-07-26 16:40:53.283338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:33.755 [2024-07-26 16:40:53.291917] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6738 00:35:33.755 [2024-07-26 16:40:53.292986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.755 [2024-07-26 16:40:53.293041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:33.755 [2024-07-26 16:40:53.306833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fb480 00:35:33.755 [2024-07-26 16:40:53.307871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.755 [2024-07-26 16:40:53.307909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:33.755 [2024-07-26 16:40:53.324426] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e95a0 00:35:33.755 [2024-07-26 16:40:53.325711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.755 [2024-07-26 16:40:53.325771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:33.755 [2024-07-26 16:40:53.340622] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3060 00:35:33.755 [2024-07-26 16:40:53.342117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:8777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.755 [2024-07-26 16:40:53.342174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:33.755 [2024-07-26 16:40:53.355576] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ecc78 00:35:33.755 [2024-07-26 16:40:53.357054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.755 [2024-07-26 16:40:53.357105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:33.755 [2024-07-26 16:40:53.373311] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eea00 00:35:33.755 [2024-07-26 16:40:53.375071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.755 [2024-07-26 16:40:53.375133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:33.755 [2024-07-26 16:40:53.389757] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x618000005480) with pdu=0x2000195e3d08 00:35:33.755 [2024-07-26 16:40:53.391675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.755 [2024-07-26 16:40:53.391730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:33.755 [2024-07-26 16:40:53.404564] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fdeb0 00:35:33.755 [2024-07-26 16:40:53.406426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.755 [2024-07-26 16:40:53.406464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:33.755 [2024-07-26 16:40:53.419208] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4b08 00:35:33.755 [2024-07-26 16:40:53.420454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.755 [2024-07-26 16:40:53.420494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:33.755 [2024-07-26 16:40:53.435328] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f9f68 00:35:33.755 [2024-07-26 16:40:53.436756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.755 [2024-07-26 16:40:53.436804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:33.755 [2024-07-26 16:40:53.453477] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ec840 00:35:33.755 [2024-07-26 16:40:53.455793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.755 [2024-07-26 16:40:53.455848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:33.755 [2024-07-26 16:40:53.468155] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e7818 00:35:33.755 [2024-07-26 16:40:53.469897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.755 [2024-07-26 16:40:53.469952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:33.755 [2024-07-26 16:40:53.482432] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5220 00:35:33.755 [2024-07-26 16:40:53.484361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.755 [2024-07-26 16:40:53.484414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:33.755 [2024-07-26 
16:40:53.497406] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e99d8 00:35:33.755 [2024-07-26 16:40:53.498542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.755 [2024-07-26 16:40:53.498598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:33.755 [2024-07-26 16:40:53.513637] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fc998 00:35:33.755 [2024-07-26 16:40:53.514966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:33.755 [2024-07-26 16:40:53.515011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:34.012 [2024-07-26 16:40:53.528854] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ef270 00:35:34.012 [2024-07-26 16:40:53.530149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.012 [2024-07-26 16:40:53.530194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:34.013 [2024-07-26 16:40:53.546795] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e38d0 00:35:34.013 [2024-07-26 16:40:53.548349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.013 [2024-07-26 16:40:53.548411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:34.013 [2024-07-26 16:40:53.563399] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1f80 00:35:34.013 [2024-07-26 16:40:53.565123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.013 [2024-07-26 16:40:53.565181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:34.013 [2024-07-26 16:40:53.578449] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eb328 00:35:34.013 [2024-07-26 16:40:53.580173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.013 [2024-07-26 16:40:53.580214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:34.013 [2024-07-26 16:40:53.593142] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df550 00:35:34.013 [2024-07-26 16:40:53.594214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.013 [2024-07-26 16:40:53.594271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 
cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:34.013 [2024-07-26 16:40:53.609372] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f9b30 00:35:34.013 [2024-07-26 16:40:53.610389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.013 [2024-07-26 16:40:53.610454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:34.013 [2024-07-26 16:40:53.627415] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fbcf0 00:35:34.013 [2024-07-26 16:40:53.629498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.013 [2024-07-26 16:40:53.629554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:34.013 [2024-07-26 16:40:53.641960] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1b48 00:35:34.013 [2024-07-26 16:40:53.643441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:16625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.013 [2024-07-26 16:40:53.643496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:34.013 [2024-07-26 16:40:53.658015] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fd640 00:35:34.013 [2024-07-26 16:40:53.659683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.013 [2024-07-26 16:40:53.659733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:34.013 [2024-07-26 16:40:53.676057] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f31b8 00:35:34.013 [2024-07-26 16:40:53.678586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.013 [2024-07-26 16:40:53.678626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:34.013 [2024-07-26 16:40:53.687103] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ef6a8 00:35:34.013 [2024-07-26 16:40:53.688149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.013 [2024-07-26 16:40:53.688205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:34.013 [2024-07-26 16:40:53.701968] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6738 00:35:34.013 [2024-07-26 16:40:53.703011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.013 [2024-07-26 16:40:53.703052] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:34.013 [2024-07-26 16:40:53.719392] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0ea0 00:35:34.013 [2024-07-26 16:40:53.720653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.013 [2024-07-26 16:40:53.720713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:34.013 [2024-07-26 16:40:53.735478] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4140 00:35:34.013 [2024-07-26 16:40:53.736954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.013 [2024-07-26 16:40:53.737009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:34.013 00:35:34.013 Latency(us) 00:35:34.013 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:34.013 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:34.013 nvme0n1 : 2.01 16098.68 62.89 0.00 0.00 7934.66 3616.62 19612.25 00:35:34.013 =================================================================================================================== 00:35:34.013 Total : 16098.68 62.89 0.00 0.00 7934.66 3616.62 19612.25 00:35:34.013 0 00:35:34.013 16:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:34.013 16:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:34.013 16:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:34.013 16:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:34.013 | .driver_specific 00:35:34.013 | .nvme_error 00:35:34.013 | .status_code 00:35:34.013 | .command_transient_transport_error' 00:35:34.270 16:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 126 > 0 )) 00:35:34.270 16:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 814925 00:35:34.270 16:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 814925 ']' 00:35:34.270 16:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 814925 00:35:34.270 16:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:34.270 16:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:34.270 16:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 814925 00:35:34.528 16:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:34.528 16:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:34.528 16:40:54 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 814925' 00:35:34.528 killing process with pid 814925 00:35:34.528 16:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 814925 00:35:34.528 Received shutdown signal, test time was about 2.000000 seconds 00:35:34.528 00:35:34.528 Latency(us) 00:35:34.528 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:34.528 =================================================================================================================== 00:35:34.528 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:34.528 16:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 814925 00:35:35.462 16:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:35:35.462 16:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:35.462 16:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:35.462 16:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:35.462 16:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:35.462 16:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=815469 00:35:35.462 16:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:35:35.462 16:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 815469 /var/tmp/bperf.sock 00:35:35.462 16:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 815469 ']' 00:35:35.462 16:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:35.462 16:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:35.462 16:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:35.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:35.462 16:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:35.462 16:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:35.462 [2024-07-26 16:40:55.082886] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:35:35.462 [2024-07-26 16:40:55.083034] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid815469 ] 00:35:35.462 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:35.462 Zero copy mechanism will not be used. 
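The trace above is where the first randwrite pass turns its error log into a verdict: get_transient_errcount (host/digest.sh) reads the nvme0n1 error counters back over the bdevperf RPC socket, and the check at host/digest.sh@71 passes because the transient-transport-error count is non-zero (126 here), matching the injected data digest failures. A minimal sketch of that query, assuming the bdevperf instance is still listening on /var/tmp/bperf.sock and nvme0n1 is the attached bdev:

    # Count completions with status TRANSIENT TRANSPORT ERROR (00/22) on nvme0n1;
    # the counter is populated because the run enables --nvme-error-stat via bdev_nvme_set_options.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    errs=$($rpc -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errs > 0 ))  # same assertion the harness makes before tearing bdevperf down
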
00:35:35.462 EAL: No free 2048 kB hugepages reported on node 1
00:35:35.462 [2024-07-26 16:40:55.217216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:35.721 [2024-07-26 16:40:55.476276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:35:36.288 16:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:35:36.288 16:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:35:36.288 16:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:36.288 16:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:36.546 16:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:35:36.546 16:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:36.546 16:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:36.546 16:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:36.546 16:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:36.546 16:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:37.112 nvme0n1
00:35:37.112 16:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:35:37.112 16:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:37.112 16:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:37.112 16:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:37.112 16:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:35:37.112 16:40:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:35:37.112 I/O size of 131072 is greater than zero copy threshold (65536).
00:35:37.112 Zero copy mechanism will not be used.
00:35:37.112 Running I/O for 2 seconds...
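Put together, the configuration the test just pushed over the bperf socket amounts to the sequence sketched below (paths shortened to a local SPDK checkout; bperf_rpc and rpc_cmd in the harness are thin wrappers around rpc.py, as the expanded @18 lines above show). Error injection stays disabled while the controller is attached, is switched to corrupt mode before the workload starts, and the transient-transport-error count is read back afterwards with the same jq filter used by the earlier get_transient_errcount call, here written as a single path expression.

    RPC="./scripts/rpc.py -s /var/tmp/bperf.sock"
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # collect NVMe error stats, retry failed I/O in the bdev layer
    $RPC accel_error_inject_error -o crc32c -t disable                   # injection off while the controller is attached
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32             # corrupt 32 crc32c operations so WRITEs hit data digest errors
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    $RPC bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'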
00:35:37.112 [2024-07-26 16:40:56.836818] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.113 [2024-07-26 16:40:56.837271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.113 [2024-07-26 16:40:56.837320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:37.113 [2024-07-26 16:40:56.855286] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.113 [2024-07-26 16:40:56.855706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.113 [2024-07-26 16:40:56.855746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:37.113 [2024-07-26 16:40:56.872681] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.113 [2024-07-26 16:40:56.873195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.113 [2024-07-26 16:40:56.873250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:37.373 [2024-07-26 16:40:56.888475] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.373 [2024-07-26 16:40:56.888805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.373 [2024-07-26 16:40:56.888841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:37.373 [2024-07-26 16:40:56.903868] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.373 [2024-07-26 16:40:56.904327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.373 [2024-07-26 16:40:56.904401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:37.373 [2024-07-26 16:40:56.919629] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.373 [2024-07-26 16:40:56.920079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.373 [2024-07-26 16:40:56.920119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:37.373 [2024-07-26 16:40:56.937015] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.373 [2024-07-26 16:40:56.937447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.373 [2024-07-26 16:40:56.937496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:37.373 [2024-07-26 16:40:56.954843] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.373 [2024-07-26 16:40:56.955273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.373 [2024-07-26 16:40:56.955312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:37.373 [2024-07-26 16:40:56.970617] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.373 [2024-07-26 16:40:56.971047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.373 [2024-07-26 16:40:56.971116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:37.373 [2024-07-26 16:40:56.986764] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.373 [2024-07-26 16:40:56.987210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.373 [2024-07-26 16:40:56.987264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:37.373 [2024-07-26 16:40:57.002017] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.373 [2024-07-26 16:40:57.002440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.373 [2024-07-26 16:40:57.002478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:37.373 [2024-07-26 16:40:57.017430] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.373 [2024-07-26 16:40:57.017855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.373 [2024-07-26 16:40:57.017892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:37.373 [2024-07-26 16:40:57.033902] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.373 [2024-07-26 16:40:57.034319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.373 [2024-07-26 16:40:57.034382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:37.373 [2024-07-26 16:40:57.051274] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.373 [2024-07-26 16:40:57.051707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.373 [2024-07-26 
16:40:57.051743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:37.373 [2024-07-26 16:40:57.065835] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.373 [2024-07-26 16:40:57.066263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.373 [2024-07-26 16:40:57.066301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:37.373 [2024-07-26 16:40:57.081895] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.373 [2024-07-26 16:40:57.082373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.373 [2024-07-26 16:40:57.082411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:37.373 [2024-07-26 16:40:57.097674] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.373 [2024-07-26 16:40:57.098130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.373 [2024-07-26 16:40:57.098169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:37.374 [2024-07-26 16:40:57.116362] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.374 [2024-07-26 16:40:57.116806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.374 [2024-07-26 16:40:57.116846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:37.632 [2024-07-26 16:40:57.136328] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.632 [2024-07-26 16:40:57.136745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.632 [2024-07-26 16:40:57.136782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:37.632 [2024-07-26 16:40:57.153314] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.632 [2024-07-26 16:40:57.153722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.632 [2024-07-26 16:40:57.153759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:37.632 [2024-07-26 16:40:57.169913] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.632 [2024-07-26 16:40:57.170261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.632 [2024-07-26 16:40:57.170299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:37.632 [2024-07-26 16:40:57.186184] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.632 [2024-07-26 16:40:57.186622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.632 [2024-07-26 16:40:57.186667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:37.632 [2024-07-26 16:40:57.201940] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.632 [2024-07-26 16:40:57.202392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.632 [2024-07-26 16:40:57.202429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:37.632 [2024-07-26 16:40:57.218096] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.632 [2024-07-26 16:40:57.218534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.632 [2024-07-26 16:40:57.218572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:37.632 [2024-07-26 16:40:57.235015] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.632 [2024-07-26 16:40:57.235491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.632 [2024-07-26 16:40:57.235529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:37.632 [2024-07-26 16:40:57.255097] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.632 [2024-07-26 16:40:57.255525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.632 [2024-07-26 16:40:57.255563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:37.632 [2024-07-26 16:40:57.272423] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.632 [2024-07-26 16:40:57.272623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.632 [2024-07-26 16:40:57.272660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:37.632 [2024-07-26 16:40:57.290146] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.632 [2024-07-26 16:40:57.290557] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.632 [2024-07-26 16:40:57.290593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:37.632 [2024-07-26 16:40:57.306752] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.632 [2024-07-26 16:40:57.307170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.632 [2024-07-26 16:40:57.307208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:37.632 [2024-07-26 16:40:57.326037] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.633 [2024-07-26 16:40:57.326465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.633 [2024-07-26 16:40:57.326504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:37.633 [2024-07-26 16:40:57.342972] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.633 [2024-07-26 16:40:57.343426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.633 [2024-07-26 16:40:57.343466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:37.633 [2024-07-26 16:40:57.358953] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.633 [2024-07-26 16:40:57.359412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.633 [2024-07-26 16:40:57.359453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:37.633 [2024-07-26 16:40:57.374851] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.633 [2024-07-26 16:40:57.375259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.633 [2024-07-26 16:40:57.375299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:37.891 [2024-07-26 16:40:57.394635] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.891 [2024-07-26 16:40:57.395095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.891 [2024-07-26 16:40:57.395136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:37.891 [2024-07-26 16:40:57.409594] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.891 [2024-07-26 16:40:57.409995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.891 [2024-07-26 16:40:57.410034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:37.891 [2024-07-26 16:40:57.424160] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.891 [2024-07-26 16:40:57.424590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.891 [2024-07-26 16:40:57.424628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:37.891 [2024-07-26 16:40:57.438326] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.891 [2024-07-26 16:40:57.438721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.891 [2024-07-26 16:40:57.438758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:37.891 [2024-07-26 16:40:57.454535] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.891 [2024-07-26 16:40:57.454933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.891 [2024-07-26 16:40:57.454970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:37.891 [2024-07-26 16:40:57.469500] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.891 [2024-07-26 16:40:57.469914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.891 [2024-07-26 16:40:57.469961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:37.891 [2024-07-26 16:40:57.484031] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.891 [2024-07-26 16:40:57.484542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.891 [2024-07-26 16:40:57.484581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:37.891 [2024-07-26 16:40:57.499024] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.891 [2024-07-26 16:40:57.499455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.891 [2024-07-26 16:40:57.499492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:37.891 [2024-07-26 
16:40:57.514488] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.891 [2024-07-26 16:40:57.514913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.891 [2024-07-26 16:40:57.514949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:37.891 [2024-07-26 16:40:57.530011] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.891 [2024-07-26 16:40:57.530432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.891 [2024-07-26 16:40:57.530470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:37.891 [2024-07-26 16:40:57.545908] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.891 [2024-07-26 16:40:57.546366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.891 [2024-07-26 16:40:57.546404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:37.891 [2024-07-26 16:40:57.562022] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.891 [2024-07-26 16:40:57.562433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.891 [2024-07-26 16:40:57.562472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:37.891 [2024-07-26 16:40:57.578696] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.891 [2024-07-26 16:40:57.579196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.891 [2024-07-26 16:40:57.579240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:37.891 [2024-07-26 16:40:57.595852] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.891 [2024-07-26 16:40:57.596297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.891 [2024-07-26 16:40:57.596339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:37.891 [2024-07-26 16:40:57.612481] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.891 [2024-07-26 16:40:57.612914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.891 [2024-07-26 16:40:57.612954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:37.891 [2024-07-26 16:40:57.630463] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.891 [2024-07-26 16:40:57.630870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.891 [2024-07-26 16:40:57.630909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:37.891 [2024-07-26 16:40:57.646168] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:37.891 [2024-07-26 16:40:57.646569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.891 [2024-07-26 16:40:57.646617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:38.149 [2024-07-26 16:40:57.662728] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.149 [2024-07-26 16:40:57.663160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.149 [2024-07-26 16:40:57.663201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:38.149 [2024-07-26 16:40:57.678006] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.149 [2024-07-26 16:40:57.678413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.149 [2024-07-26 16:40:57.678452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:38.150 [2024-07-26 16:40:57.694310] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.150 [2024-07-26 16:40:57.694714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.150 [2024-07-26 16:40:57.694753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:38.150 [2024-07-26 16:40:57.709735] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.150 [2024-07-26 16:40:57.710130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.150 [2024-07-26 16:40:57.710168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:38.150 [2024-07-26 16:40:57.724605] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.150 [2024-07-26 16:40:57.724988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.150 [2024-07-26 16:40:57.725028] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:38.150 [2024-07-26 16:40:57.739371] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.150 [2024-07-26 16:40:57.739769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.150 [2024-07-26 16:40:57.739816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:38.150 [2024-07-26 16:40:57.754384] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.150 [2024-07-26 16:40:57.754784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.150 [2024-07-26 16:40:57.754823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:38.150 [2024-07-26 16:40:57.769698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.150 [2024-07-26 16:40:57.770126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.150 [2024-07-26 16:40:57.770166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:38.150 [2024-07-26 16:40:57.786202] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.150 [2024-07-26 16:40:57.786536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.150 [2024-07-26 16:40:57.786576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:38.150 [2024-07-26 16:40:57.803681] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.150 [2024-07-26 16:40:57.804093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.150 [2024-07-26 16:40:57.804132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:38.150 [2024-07-26 16:40:57.820532] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.150 [2024-07-26 16:40:57.820950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.150 [2024-07-26 16:40:57.820988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:38.150 [2024-07-26 16:40:57.837295] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.150 [2024-07-26 16:40:57.837716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:38.150 [2024-07-26 16:40:57.837751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:38.150 [2024-07-26 16:40:57.851871] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.150 [2024-07-26 16:40:57.852319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.150 [2024-07-26 16:40:57.852359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:38.150 [2024-07-26 16:40:57.867149] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.150 [2024-07-26 16:40:57.867630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.150 [2024-07-26 16:40:57.867669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:38.150 [2024-07-26 16:40:57.882790] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.150 [2024-07-26 16:40:57.883238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.150 [2024-07-26 16:40:57.883278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:38.150 [2024-07-26 16:40:57.899329] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.150 [2024-07-26 16:40:57.899753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.150 [2024-07-26 16:40:57.899792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:38.408 [2024-07-26 16:40:57.916028] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.408 [2024-07-26 16:40:57.916455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.408 [2024-07-26 16:40:57.916495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:38.408 [2024-07-26 16:40:57.932940] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.408 [2024-07-26 16:40:57.933388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.408 [2024-07-26 16:40:57.933443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:38.408 [2024-07-26 16:40:57.949258] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.408 [2024-07-26 16:40:57.949690] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.408 [2024-07-26 16:40:57.949728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:38.409 [2024-07-26 16:40:57.964295] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.409 [2024-07-26 16:40:57.964692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.409 [2024-07-26 16:40:57.964728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:38.409 [2024-07-26 16:40:57.981017] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.409 [2024-07-26 16:40:57.981496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.409 [2024-07-26 16:40:57.981533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:38.409 [2024-07-26 16:40:57.998249] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.409 [2024-07-26 16:40:57.998664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.409 [2024-07-26 16:40:57.998701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:38.409 [2024-07-26 16:40:58.014832] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.409 [2024-07-26 16:40:58.015275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.409 [2024-07-26 16:40:58.015314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:38.409 [2024-07-26 16:40:58.030220] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.409 [2024-07-26 16:40:58.030705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.409 [2024-07-26 16:40:58.030744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:38.409 [2024-07-26 16:40:58.045288] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.409 [2024-07-26 16:40:58.045713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.409 [2024-07-26 16:40:58.045750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:38.409 [2024-07-26 16:40:58.061737] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) 
with pdu=0x2000195fef90 00:35:38.409 [2024-07-26 16:40:58.062131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.409 [2024-07-26 16:40:58.062168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:38.409 [2024-07-26 16:40:58.076453] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.409 [2024-07-26 16:40:58.076843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.409 [2024-07-26 16:40:58.076880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:38.409 [2024-07-26 16:40:58.093049] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.409 [2024-07-26 16:40:58.093533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.409 [2024-07-26 16:40:58.093571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:38.409 [2024-07-26 16:40:58.109865] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.409 [2024-07-26 16:40:58.110303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.409 [2024-07-26 16:40:58.110342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:38.409 [2024-07-26 16:40:58.130612] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.409 [2024-07-26 16:40:58.131028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.409 [2024-07-26 16:40:58.131074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:38.409 [2024-07-26 16:40:58.147632] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.409 [2024-07-26 16:40:58.148022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.409 [2024-07-26 16:40:58.148081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:38.409 [2024-07-26 16:40:58.164874] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.409 [2024-07-26 16:40:58.165300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.409 [2024-07-26 16:40:58.165338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:38.667 [2024-07-26 16:40:58.181709] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.667 [2024-07-26 16:40:58.182137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.667 [2024-07-26 16:40:58.182176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:38.667 [2024-07-26 16:40:58.198745] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.667 [2024-07-26 16:40:58.199159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.667 [2024-07-26 16:40:58.199199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:38.667 [2024-07-26 16:40:58.217696] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.667 [2024-07-26 16:40:58.218160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.667 [2024-07-26 16:40:58.218200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:38.667 [2024-07-26 16:40:58.238052] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.667 [2024-07-26 16:40:58.238508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.667 [2024-07-26 16:40:58.238546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:38.667 [2024-07-26 16:40:58.254960] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.667 [2024-07-26 16:40:58.255394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.667 [2024-07-26 16:40:58.255434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:38.667 [2024-07-26 16:40:58.270744] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.667 [2024-07-26 16:40:58.271159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.667 [2024-07-26 16:40:58.271197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:38.667 [2024-07-26 16:40:58.287199] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.667 [2024-07-26 16:40:58.287597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.667 [2024-07-26 16:40:58.287635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:38.667 [2024-07-26 16:40:58.302895] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.667 [2024-07-26 16:40:58.303321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.668 [2024-07-26 16:40:58.303374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:38.668 [2024-07-26 16:40:58.320011] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.668 [2024-07-26 16:40:58.320466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.668 [2024-07-26 16:40:58.320506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:38.668 [2024-07-26 16:40:58.335264] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.668 [2024-07-26 16:40:58.335668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.668 [2024-07-26 16:40:58.335706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:38.668 [2024-07-26 16:40:58.350941] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.668 [2024-07-26 16:40:58.351343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.668 [2024-07-26 16:40:58.351381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:38.668 [2024-07-26 16:40:58.367404] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.668 [2024-07-26 16:40:58.367804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.668 [2024-07-26 16:40:58.367841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:38.668 [2024-07-26 16:40:58.383037] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.668 [2024-07-26 16:40:58.383470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.668 [2024-07-26 16:40:58.383509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:38.668 [2024-07-26 16:40:58.399094] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.668 [2024-07-26 16:40:58.399518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.668 [2024-07-26 16:40:58.399556] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:38.668 [2024-07-26 16:40:58.417255] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.668 [2024-07-26 16:40:58.417663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.668 [2024-07-26 16:40:58.417701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:38.926 [2024-07-26 16:40:58.434885] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.926 [2024-07-26 16:40:58.435319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.926 [2024-07-26 16:40:58.435358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:38.926 [2024-07-26 16:40:58.453734] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.926 [2024-07-26 16:40:58.454183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.926 [2024-07-26 16:40:58.454232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:38.926 [2024-07-26 16:40:58.470263] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.926 [2024-07-26 16:40:58.470679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.926 [2024-07-26 16:40:58.470716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:38.926 [2024-07-26 16:40:58.485619] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.926 [2024-07-26 16:40:58.486004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.926 [2024-07-26 16:40:58.486055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:38.926 [2024-07-26 16:40:58.501708] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.926 [2024-07-26 16:40:58.502150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.926 [2024-07-26 16:40:58.502187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:38.926 [2024-07-26 16:40:58.519916] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.926 [2024-07-26 16:40:58.520354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:38.926 [2024-07-26 16:40:58.520392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:38.926 [2024-07-26 16:40:58.535589] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.926 [2024-07-26 16:40:58.535986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.926 [2024-07-26 16:40:58.536023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:38.926 [2024-07-26 16:40:58.553236] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.926 [2024-07-26 16:40:58.553638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.926 [2024-07-26 16:40:58.553676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:38.926 [2024-07-26 16:40:58.568797] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.926 [2024-07-26 16:40:58.569211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.926 [2024-07-26 16:40:58.569249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:38.926 [2024-07-26 16:40:58.587357] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.926 [2024-07-26 16:40:58.587767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.926 [2024-07-26 16:40:58.587805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:38.926 [2024-07-26 16:40:58.605429] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.926 [2024-07-26 16:40:58.605821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.926 [2024-07-26 16:40:58.605860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:38.926 [2024-07-26 16:40:58.620756] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.926 [2024-07-26 16:40:58.621200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.926 [2024-07-26 16:40:58.621240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:38.926 [2024-07-26 16:40:58.635818] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.926 [2024-07-26 16:40:58.636272] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.926 [2024-07-26 16:40:58.636314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:38.926 [2024-07-26 16:40:58.652088] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.926 [2024-07-26 16:40:58.652536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.926 [2024-07-26 16:40:58.652574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:38.926 [2024-07-26 16:40:58.668644] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.926 [2024-07-26 16:40:58.669054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.926 [2024-07-26 16:40:58.669101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:38.926 [2024-07-26 16:40:58.683557] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:38.926 [2024-07-26 16:40:58.684022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.926 [2024-07-26 16:40:58.684070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:39.185 [2024-07-26 16:40:58.701122] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:39.185 [2024-07-26 16:40:58.701561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.185 [2024-07-26 16:40:58.701598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:39.185 [2024-07-26 16:40:58.717142] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:39.185 [2024-07-26 16:40:58.717546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.185 [2024-07-26 16:40:58.717584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:39.185 [2024-07-26 16:40:58.733689] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:39.185 [2024-07-26 16:40:58.734125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.185 [2024-07-26 16:40:58.734172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:39.185 [2024-07-26 16:40:58.749405] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:35:39.185 [2024-07-26 16:40:58.749820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.185 [2024-07-26 16:40:58.749858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:39.185 [2024-07-26 16:40:58.765480] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:39.185 [2024-07-26 16:40:58.765934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.185 [2024-07-26 16:40:58.765972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:39.185 [2024-07-26 16:40:58.780510] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:39.185 [2024-07-26 16:40:58.780908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.185 [2024-07-26 16:40:58.780945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:39.185 [2024-07-26 16:40:58.797680] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:39.185 [2024-07-26 16:40:58.798102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.185 [2024-07-26 16:40:58.798139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:39.185 [2024-07-26 16:40:58.816074] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:39.185 [2024-07-26 16:40:58.816461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.185 [2024-07-26 16:40:58.816497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:39.185 00:35:39.185 Latency(us) 00:35:39.185 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:39.185 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:39.185 nvme0n1 : 2.01 1871.21 233.90 0.00 0.00 8521.91 6553.60 20486.07 00:35:39.185 =================================================================================================================== 00:35:39.185 Total : 1871.21 233.90 0.00 0.00 8521.91 6553.60 20486.07 00:35:39.185 0 00:35:39.185 16:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:39.185 16:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:39.185 16:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:39.185 16:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 
00:35:39.185 | .driver_specific 00:35:39.185 | .nvme_error 00:35:39.185 | .status_code 00:35:39.185 | .command_transient_transport_error' 00:35:39.443 16:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 121 > 0 )) 00:35:39.443 16:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 815469 00:35:39.443 16:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 815469 ']' 00:35:39.444 16:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 815469 00:35:39.444 16:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:39.444 16:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:39.444 16:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 815469 00:35:39.444 16:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:39.444 16:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:39.444 16:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 815469' 00:35:39.444 killing process with pid 815469 00:35:39.444 16:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 815469 00:35:39.444 Received shutdown signal, test time was about 2.000000 seconds 00:35:39.444 00:35:39.444 Latency(us) 00:35:39.444 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:39.444 =================================================================================================================== 00:35:39.444 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:39.444 16:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 815469 00:35:40.817 16:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 813444 00:35:40.817 16:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 813444 ']' 00:35:40.817 16:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 813444 00:35:40.817 16:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:40.817 16:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:40.817 16:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 813444 00:35:40.817 16:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:40.817 16:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:40.817 16:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 813444' 00:35:40.817 killing process with pid 813444 00:35:40.817 16:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 813444 00:35:40.817 16:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 813444 
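The transient-error assertion traced above reduces to a single RPC call plus a jq filter. A minimal sketch of the equivalent manual check, assuming the bdevperf RPC socket is still listening at /var/tmp/bperf.sock and the target bdev is named nvme0n1 (both as in this run):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Pull per-bdev I/O statistics from the running bdevperf instance.
    stats=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1)
    # Extract the count of completions that ended in COMMAND TRANSIENT TRANSPORT ERROR (00/22),
    # which is what the injected data digest errors surface as on the host side.
    errcount=$(jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error' <<< "$stats")
    # The digest-error test only passes if at least one such error was counted; this run saw 121.
    (( errcount > 0 )) && echo "data digest error path exercised: $errcount transient transport errors"
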
00:35:41.752 00:35:41.752 real 0m23.393s 00:35:41.752 user 0m45.363s 00:35:41.752 sys 0m4.633s 00:35:41.752 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:41.752 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:41.752 ************************************ 00:35:41.752 END TEST nvmf_digest_error 00:35:41.752 ************************************ 00:35:41.752 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:35:41.752 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:35:41.752 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:41.752 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:35:41.752 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:41.752 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:35:41.752 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:41.752 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:41.752 rmmod nvme_tcp 00:35:41.752 rmmod nvme_fabrics 00:35:41.752 rmmod nvme_keyring 00:35:41.752 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:41.752 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:35:41.752 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:35:41.752 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 813444 ']' 00:35:41.752 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 813444 00:35:41.752 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 813444 ']' 00:35:41.752 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 813444 00:35:41.752 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (813444) - No such process 00:35:41.752 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 813444 is not found' 00:35:41.752 Process with pid 813444 is not found 00:35:41.752 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:41.752 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:41.752 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:41.752 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:41.752 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:41.752 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:41.752 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:41.752 16:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:44.286 16:41:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:44.286 00:35:44.286 real 0m52.469s 00:35:44.286 user 1m33.297s 00:35:44.286 sys 0m10.740s 00:35:44.286 16:41:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:44.286 16:41:03 
nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:44.286 ************************************ 00:35:44.286 END TEST nvmf_digest 00:35:44.286 ************************************ 00:35:44.286 16:41:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:35:44.286 16:41:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:35:44.286 16:41:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:35:44.286 16:41:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:44.286 16:41:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:44.286 16:41:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:44.286 16:41:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.286 ************************************ 00:35:44.286 START TEST nvmf_bdevperf 00:35:44.286 ************************************ 00:35:44.286 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:44.286 * Looking for test storage... 00:35:44.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:44.286 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:44.286 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:35:44.286 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:44.286 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:44.286 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:44.286 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:44.286 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:44.286 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:44.286 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:44.286 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:44.286 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:44.286 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:44.286 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:44.286 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:44.287 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:44.287 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:44.287 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:44.287 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:44.287 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:44.287 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:44.287 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:44.287 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:44.287 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:44.287 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:44.287 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:44.287 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:35:44.287 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:44.287 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:35:44.287 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:44.287 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:44.287 16:41:03 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:44.287 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:44.287 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:44.287 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:44.287 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:44.287 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:44.287 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:44.287 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:44.287 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:35:44.287 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:44.287 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:44.287 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:44.287 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:44.287 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:44.287 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:44.287 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:44.287 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:44.287 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:44.287 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:44.287 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:35:44.287 16:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 
-- # mlx=() 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:46.187 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:46.187 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:46.187 16:41:05 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:46.187 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:46.187 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 
2 > 1 )) 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:46.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:46.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:35:46.187 00:35:46.187 --- 10.0.0.2 ping statistics --- 00:35:46.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:46.187 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:46.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:46.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:35:46.187 00:35:46.187 --- 10.0.0.1 ping statistics --- 00:35:46.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:46.187 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:46.187 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:46.188 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:46.188 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:46.188 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:46.188 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:35:46.188 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:46.188 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:46.188 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:46.188 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:46.188 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=818196 00:35:46.188 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:46.188 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 818196 00:35:46.188 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 818196 ']' 00:35:46.188 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:46.188 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:46.188 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:46.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:46.188 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:46.188 16:41:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:46.188 [2024-07-26 16:41:05.771987] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:35:46.188 [2024-07-26 16:41:05.772150] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:46.188 EAL: No free 2048 kB hugepages reported on node 1 00:35:46.188 [2024-07-26 16:41:05.902070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:46.446 [2024-07-26 16:41:06.126796] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:46.446 [2024-07-26 16:41:06.126861] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:46.446 [2024-07-26 16:41:06.126895] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:46.446 [2024-07-26 16:41:06.126913] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:46.446 [2024-07-26 16:41:06.126930] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:46.446 [2024-07-26 16:41:06.127032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:46.446 [2024-07-26 16:41:06.127165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:46.446 [2024-07-26 16:41:06.127172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:35:47.011 16:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:47.011 16:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:35:47.011 16:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:47.011 16:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:47.011 16:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:47.011 16:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:47.011 16:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:47.011 16:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.011 16:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:47.011 [2024-07-26 16:41:06.698742] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:47.011 16:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.011 16:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:47.011 16:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.011 16:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:47.273 Malloc0 00:35:47.273 16:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.273 16:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:47.273 16:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.273 16:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:47.273 16:41:06 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.273 16:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:47.273 16:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.273 16:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:47.273 16:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.273 16:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:47.273 16:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.273 16:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:47.273 [2024-07-26 16:41:06.808898] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:47.273 16:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.273 16:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:35:47.273 16:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:35:47.273 16:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:35:47.273 16:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:35:47.273 16:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:47.273 16:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:47.273 { 00:35:47.273 "params": { 00:35:47.273 "name": "Nvme$subsystem", 00:35:47.273 "trtype": "$TEST_TRANSPORT", 00:35:47.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:47.273 "adrfam": "ipv4", 00:35:47.273 "trsvcid": "$NVMF_PORT", 00:35:47.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:47.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:47.273 "hdgst": ${hdgst:-false}, 00:35:47.273 "ddgst": ${ddgst:-false} 00:35:47.273 }, 00:35:47.273 "method": "bdev_nvme_attach_controller" 00:35:47.273 } 00:35:47.273 EOF 00:35:47.273 )") 00:35:47.273 16:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:35:47.273 16:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:35:47.273 16:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:35:47.273 16:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:47.273 "params": { 00:35:47.273 "name": "Nvme1", 00:35:47.273 "trtype": "tcp", 00:35:47.273 "traddr": "10.0.0.2", 00:35:47.273 "adrfam": "ipv4", 00:35:47.273 "trsvcid": "4420", 00:35:47.273 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:47.273 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:47.273 "hdgst": false, 00:35:47.273 "ddgst": false 00:35:47.273 }, 00:35:47.273 "method": "bdev_nvme_attach_controller" 00:35:47.273 }' 00:35:47.273 [2024-07-26 16:41:06.892209] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:35:47.273 [2024-07-26 16:41:06.892350] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid818350 ] 00:35:47.273 EAL: No free 2048 kB hugepages reported on node 1 00:35:47.273 [2024-07-26 16:41:07.012852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:47.551 [2024-07-26 16:41:07.254457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:48.124 Running I/O for 1 seconds... 00:35:49.057 00:35:49.057 Latency(us) 00:35:49.057 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:49.057 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:49.057 Verification LBA range: start 0x0 length 0x4000 00:35:49.057 Nvme1n1 : 1.00 6204.50 24.24 0.00 0.00 20537.72 1601.99 19709.35 00:35:49.057 =================================================================================================================== 00:35:49.057 Total : 6204.50 24.24 0.00 0.00 20537.72 1601.99 19709.35 00:35:49.992 16:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=818627 00:35:49.992 16:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:35:49.992 16:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:35:49.992 16:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:35:49.992 16:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:35:49.992 16:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:35:49.992 16:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:49.992 16:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:49.992 { 00:35:49.992 "params": { 00:35:49.992 "name": "Nvme$subsystem", 00:35:49.992 "trtype": "$TEST_TRANSPORT", 00:35:49.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:49.992 "adrfam": "ipv4", 00:35:49.992 "trsvcid": "$NVMF_PORT", 00:35:49.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:49.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:49.992 "hdgst": ${hdgst:-false}, 00:35:49.992 "ddgst": ${ddgst:-false} 00:35:49.992 }, 00:35:49.992 "method": "bdev_nvme_attach_controller" 00:35:49.992 } 00:35:49.992 EOF 00:35:49.992 )") 00:35:49.993 16:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:35:49.993 16:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:35:49.993 16:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:35:49.993 16:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:49.993 "params": { 00:35:49.993 "name": "Nvme1", 00:35:49.993 "trtype": "tcp", 00:35:49.993 "traddr": "10.0.0.2", 00:35:49.993 "adrfam": "ipv4", 00:35:49.993 "trsvcid": "4420", 00:35:49.993 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:49.993 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:49.993 "hdgst": false, 00:35:49.993 "ddgst": false 00:35:49.993 }, 00:35:49.993 "method": "bdev_nvme_attach_controller" 00:35:49.993 }' 00:35:50.251 [2024-07-26 16:41:09.796807] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:35:50.251 [2024-07-26 16:41:09.796958] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid818627 ] 00:35:50.251 EAL: No free 2048 kB hugepages reported on node 1 00:35:50.251 [2024-07-26 16:41:09.921259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:50.510 [2024-07-26 16:41:10.166240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:51.076 Running I/O for 15 seconds... 00:35:52.975 16:41:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 818196 00:35:52.975 16:41:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:35:53.236 [2024-07-26 16:41:12.744949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:105336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.236 [2024-07-26 16:41:12.745039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.236 [2024-07-26 16:41:12.745118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:105344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.236 [2024-07-26 16:41:12.745151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.236 [2024-07-26 16:41:12.745185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:105352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.236 [2024-07-26 16:41:12.745214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.236 [2024-07-26 16:41:12.745243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:106312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.236 [2024-07-26 16:41:12.745269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.236 [2024-07-26 16:41:12.745301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:106320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.236 [2024-07-26 16:41:12.745328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.236 [2024-07-26 16:41:12.745358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:106328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.236 [2024-07-26 16:41:12.745383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.236 [2024-07-26 16:41:12.745412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:106336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.236 [2024-07-26 16:41:12.745438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.236 [2024-07-26 16:41:12.745465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:106344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.236 [2024-07-26 16:41:12.745490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.236 [2024-07-26 16:41:12.745516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:105360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.236 [2024-07-26 16:41:12.745540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.236 [2024-07-26 16:41:12.745567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:105368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.236 [2024-07-26 16:41:12.745592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.236 [2024-07-26 16:41:12.745620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:105376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.236 [2024-07-26 16:41:12.745644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.236 [2024-07-26 16:41:12.745672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:105384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.236 [2024-07-26 16:41:12.745697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.236 [2024-07-26 16:41:12.745742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:105392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.236 [2024-07-26 16:41:12.745768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.236 [2024-07-26 16:41:12.745802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:105400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.236 [2024-07-26 16:41:12.745829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.236 [2024-07-26 16:41:12.745856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:105408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.236 [2024-07-26 16:41:12.745881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.236 [2024-07-26 16:41:12.745908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:105416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.236 [2024-07-26 16:41:12.745933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.236 [2024-07-26 16:41:12.745960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:105424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.236 [2024-07-26 16:41:12.745985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.236 [2024-07-26 16:41:12.746013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.236 [2024-07-26 16:41:12.746038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.236 [2024-07-26 16:41:12.746072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:105440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.236 [2024-07-26 16:41:12.746100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.236 [2024-07-26 16:41:12.746127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.236 [2024-07-26 16:41:12.746151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.236 [2024-07-26 16:41:12.746179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:105456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.236 [2024-07-26 16:41:12.746203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.236 [2024-07-26 16:41:12.746230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:105464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.236 [2024-07-26 16:41:12.746255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.236 [2024-07-26 16:41:12.746282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:105472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.236 [2024-07-26 16:41:12.746307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.236 [2024-07-26 16:41:12.746335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:105480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.236 [2024-07-26 16:41:12.746361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.236 [2024-07-26 16:41:12.746388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:106352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.237 [2024-07-26 16:41:12.746412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.237 [2024-07-26 16:41:12.746439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:105488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.237 [2024-07-26 16:41:12.746469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.237 [2024-07-26 16:41:12.746497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:105496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.237 [2024-07-26 16:41:12.746522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.237 [2024-07-26 16:41:12.746549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:105504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.237 [2024-07-26 16:41:12.746574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:35:53.237 [2024-07-26 16:41:12.746602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:105512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.237 [2024-07-26 16:41:12.746627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.237 [2024-07-26 16:41:12.746654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:105520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.237 [2024-07-26 16:41:12.746679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.237 [2024-07-26 16:41:12.746707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.237 [2024-07-26 16:41:12.746732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.237 [2024-07-26 16:41:12.746759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.237 [2024-07-26 16:41:12.746784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.237 [2024-07-26 16:41:12.746811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:105544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.237 [2024-07-26 16:41:12.746836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.237 [2024-07-26 16:41:12.746863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.237 [2024-07-26 16:41:12.746888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.237 [2024-07-26 16:41:12.746915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:105560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.237 [2024-07-26 16:41:12.746940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.237 [2024-07-26 16:41:12.746967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:105568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.237 [2024-07-26 16:41:12.746992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.237 [2024-07-26 16:41:12.747020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:105576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.237 [2024-07-26 16:41:12.747045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.237 [2024-07-26 16:41:12.747082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:105584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.237 [2024-07-26 16:41:12.747109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.237 [2024-07-26 
16:41:12.747158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:105592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.237 [2024-07-26 16:41:12.747182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.237 [2024-07-26 16:41:12.747207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:105600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.237 [2024-07-26 16:41:12.747229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.237 [2024-07-26 16:41:12.747254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:105608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.237 [2024-07-26 16:41:12.747277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.237 [2024-07-26 16:41:12.747303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:105616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.237 [2024-07-26 16:41:12.747325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.237 [2024-07-26 16:41:12.747368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:105624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.237 [2024-07-26 16:41:12.747390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.237 [2024-07-26 16:41:12.747431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:105632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.237 [2024-07-26 16:41:12.747457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.237 [2024-07-26 16:41:12.747485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:105640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.237 [2024-07-26 16:41:12.747511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.237 [2024-07-26 16:41:12.747538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:105648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.237 [2024-07-26 16:41:12.747563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.237 [2024-07-26 16:41:12.747590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:105656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.237 [2024-07-26 16:41:12.747615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.237 [2024-07-26 16:41:12.747642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:105664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.237 [2024-07-26 16:41:12.747667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.237 [2024-07-26 16:41:12.747693] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:105672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.237 [2024-07-26 16:41:12.747718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.237 [2024-07-26 16:41:12.747746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:105680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.237 [2024-07-26 16:41:12.747771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.237 [2024-07-26 16:41:12.747798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:105688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.237 [2024-07-26 16:41:12.747827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.237 [2024-07-26 16:41:12.747856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:105696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.237 [2024-07-26 16:41:12.747881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.237 [2024-07-26 16:41:12.747909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:105704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.237 [2024-07-26 16:41:12.747935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.237 [2024-07-26 16:41:12.747962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:105712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.237 [2024-07-26 16:41:12.747986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.237 [2024-07-26 16:41:12.748014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:105720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.237 [2024-07-26 16:41:12.748039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.237 [2024-07-26 16:41:12.748075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:105728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.237 [2024-07-26 16:41:12.748103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.237 [2024-07-26 16:41:12.748131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:105736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.237 [2024-07-26 16:41:12.748155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.237 [2024-07-26 16:41:12.748183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:105744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.237 [2024-07-26 16:41:12.748208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.237 [2024-07-26 16:41:12.748236] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:116 nsid:1 lba:105752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.237 [2024-07-26 16:41:12.748261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.237 [2024-07-26 16:41:12.748288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:105760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.237 [2024-07-26 16:41:12.748313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.237 [2024-07-26 16:41:12.748341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:105768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.237 [2024-07-26 16:41:12.748366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.237 [2024-07-26 16:41:12.748393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:105776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.237 [2024-07-26 16:41:12.748420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.237 [2024-07-26 16:41:12.748448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:105784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.237 [2024-07-26 16:41:12.748473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.237 [2024-07-26 16:41:12.748506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:105792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.237 [2024-07-26 16:41:12.748531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.238 [2024-07-26 16:41:12.748558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:105800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.238 [2024-07-26 16:41:12.748584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.238 [2024-07-26 16:41:12.748612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:105808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.238 [2024-07-26 16:41:12.748637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.238 [2024-07-26 16:41:12.748665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:105816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.238 [2024-07-26 16:41:12.748689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.238 [2024-07-26 16:41:12.748718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:105824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.238 [2024-07-26 16:41:12.748743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.238 [2024-07-26 16:41:12.748771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 
nsid:1 lba:105832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.238 [2024-07-26 16:41:12.748796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.238 [2024-07-26 16:41:12.748824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:105840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.238 [2024-07-26 16:41:12.748849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.238 [2024-07-26 16:41:12.748879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:105848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.238 [2024-07-26 16:41:12.748904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.238 [2024-07-26 16:41:12.748932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:105856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.238 [2024-07-26 16:41:12.748957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.238 [2024-07-26 16:41:12.748985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:105864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.238 [2024-07-26 16:41:12.749010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.238 [2024-07-26 16:41:12.749038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:105872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.238 [2024-07-26 16:41:12.749071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.238 [2024-07-26 16:41:12.749101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:105880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.238 [2024-07-26 16:41:12.749126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.238 [2024-07-26 16:41:12.749155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:105888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.238 [2024-07-26 16:41:12.749184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.238 [2024-07-26 16:41:12.749228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:105896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.238 [2024-07-26 16:41:12.749254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.238 [2024-07-26 16:41:12.749282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:105904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.238 [2024-07-26 16:41:12.749307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.238 [2024-07-26 16:41:12.749336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:105912 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.238 [2024-07-26 16:41:12.749361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.238 [2024-07-26 16:41:12.749390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:105920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.238 [2024-07-26 16:41:12.749415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.238 [2024-07-26 16:41:12.749442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:105928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.238 [2024-07-26 16:41:12.749466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.238 [2024-07-26 16:41:12.749493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:105936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.238 [2024-07-26 16:41:12.749517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.238 [2024-07-26 16:41:12.749544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:105944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.238 [2024-07-26 16:41:12.749568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.238 [2024-07-26 16:41:12.749596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:105952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.238 [2024-07-26 16:41:12.749619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.238 [2024-07-26 16:41:12.749646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:105960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.238 [2024-07-26 16:41:12.749671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.238 [2024-07-26 16:41:12.749698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:105968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.238 [2024-07-26 16:41:12.749723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.238 [2024-07-26 16:41:12.749751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:105976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.238 [2024-07-26 16:41:12.749775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.238 [2024-07-26 16:41:12.749803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:105984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.238 [2024-07-26 16:41:12.749827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.238 [2024-07-26 16:41:12.749862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:105992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:53.238 [2024-07-26 16:41:12.749888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.238 [2024-07-26 16:41:12.749915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:106000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.238 [2024-07-26 16:41:12.749940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.238 [2024-07-26 16:41:12.749967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:106008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.238 [2024-07-26 16:41:12.749992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.238 [2024-07-26 16:41:12.750020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:106016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.238 [2024-07-26 16:41:12.750044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.238 [2024-07-26 16:41:12.750080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:106024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.238 [2024-07-26 16:41:12.750106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.238 [2024-07-26 16:41:12.750135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:106032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.238 [2024-07-26 16:41:12.750159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.238 [2024-07-26 16:41:12.750187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:106040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.238 [2024-07-26 16:41:12.750212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.238 [2024-07-26 16:41:12.750240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:106048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.238 [2024-07-26 16:41:12.750265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.238 [2024-07-26 16:41:12.750291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:106056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.238 [2024-07-26 16:41:12.750316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.238 [2024-07-26 16:41:12.750342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:106064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.238 [2024-07-26 16:41:12.750367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.238 [2024-07-26 16:41:12.750393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:106072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.238 [2024-07-26 
16:41:12.750418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.238 [2024-07-26 16:41:12.750445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:106080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.238 [2024-07-26 16:41:12.750469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.238 [2024-07-26 16:41:12.750496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:106088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.238 [2024-07-26 16:41:12.750525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.238 [2024-07-26 16:41:12.750553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:106096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.238 [2024-07-26 16:41:12.750578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.238 [2024-07-26 16:41:12.750605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:106104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.238 [2024-07-26 16:41:12.750630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.239 [2024-07-26 16:41:12.750657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:106112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.239 [2024-07-26 16:41:12.750682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.239 [2024-07-26 16:41:12.750708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:106120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.239 [2024-07-26 16:41:12.750733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.239 [2024-07-26 16:41:12.750759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:106128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.239 [2024-07-26 16:41:12.750783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.239 [2024-07-26 16:41:12.750810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:106136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.239 [2024-07-26 16:41:12.750834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.239 [2024-07-26 16:41:12.750860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:106144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.239 [2024-07-26 16:41:12.750884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.239 [2024-07-26 16:41:12.750911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:106152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.239 [2024-07-26 16:41:12.750935] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.239 [2024-07-26 16:41:12.750962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:106160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.239 [2024-07-26 16:41:12.750986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.239 [2024-07-26 16:41:12.751013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:106168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.239 [2024-07-26 16:41:12.751038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.239 [2024-07-26 16:41:12.751073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:106176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.239 [2024-07-26 16:41:12.751100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.239 [2024-07-26 16:41:12.751128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:106184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.239 [2024-07-26 16:41:12.751152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.239 [2024-07-26 16:41:12.751179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:106192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.239 [2024-07-26 16:41:12.751207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.239 [2024-07-26 16:41:12.751236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:106200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.239 [2024-07-26 16:41:12.751260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.239 [2024-07-26 16:41:12.751288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:106208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.239 [2024-07-26 16:41:12.751313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.239 [2024-07-26 16:41:12.751340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:106216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.239 [2024-07-26 16:41:12.751365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.239 [2024-07-26 16:41:12.751392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:106224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.239 [2024-07-26 16:41:12.751417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.239 [2024-07-26 16:41:12.751444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:106232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.239 [2024-07-26 16:41:12.751468] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.239 [2024-07-26 16:41:12.751495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:106240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.239 [2024-07-26 16:41:12.751519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.239 [2024-07-26 16:41:12.751546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:106248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.239 [2024-07-26 16:41:12.751571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.239 [2024-07-26 16:41:12.751599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:106256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.239 [2024-07-26 16:41:12.751623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.239 [2024-07-26 16:41:12.751650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:106264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.239 [2024-07-26 16:41:12.751675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.239 [2024-07-26 16:41:12.751702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.239 [2024-07-26 16:41:12.751726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.239 [2024-07-26 16:41:12.751754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:106280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.239 [2024-07-26 16:41:12.751778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.239 [2024-07-26 16:41:12.751806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:106288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.239 [2024-07-26 16:41:12.751830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.239 [2024-07-26 16:41:12.751861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:106296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.239 [2024-07-26 16:41:12.751886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.239 [2024-07-26 16:41:12.751912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2c80 is same with the state(5) to be set 00:35:53.239 [2024-07-26 16:41:12.751943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:53.239 [2024-07-26 16:41:12.751964] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:53.239 [2024-07-26 16:41:12.751985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106304 len:8 PRP1 0x0 PRP2 0x0 00:35:53.239 [2024-07-26 16:41:12.752008] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.239 [2024-07-26 16:41:12.752333] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f2c80 was disconnected and freed. reset controller. 00:35:53.239 [2024-07-26 16:41:12.752485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:53.239 [2024-07-26 16:41:12.752518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.239 [2024-07-26 16:41:12.752545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:53.239 [2024-07-26 16:41:12.752568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.239 [2024-07-26 16:41:12.752592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:53.239 [2024-07-26 16:41:12.752616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.239 [2024-07-26 16:41:12.752640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:53.239 [2024-07-26 16:41:12.752662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.239 [2024-07-26 16:41:12.752684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.239 [2024-07-26 16:41:12.757162] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.239 [2024-07-26 16:41:12.757221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.239 [2024-07-26 16:41:12.758166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.239 [2024-07-26 16:41:12.758209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.239 [2024-07-26 16:41:12.758241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.239 [2024-07-26 16:41:12.758546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.239 [2024-07-26 16:41:12.758844] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.239 [2024-07-26 16:41:12.758878] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.239 [2024-07-26 16:41:12.758907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.239 [2024-07-26 16:41:12.763156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:53.239 [2024-07-26 16:41:12.772075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.239 [2024-07-26 16:41:12.772614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.239 [2024-07-26 16:41:12.772657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.239 [2024-07-26 16:41:12.772683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.239 [2024-07-26 16:41:12.772975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.239 [2024-07-26 16:41:12.773280] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.239 [2024-07-26 16:41:12.773313] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.239 [2024-07-26 16:41:12.773335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.239 [2024-07-26 16:41:12.777544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:53.240 [2024-07-26 16:41:12.786725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.240 [2024-07-26 16:41:12.787247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.240 [2024-07-26 16:41:12.787290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.240 [2024-07-26 16:41:12.787316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.240 [2024-07-26 16:41:12.787610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.240 [2024-07-26 16:41:12.787904] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.240 [2024-07-26 16:41:12.787936] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.240 [2024-07-26 16:41:12.787958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.240 [2024-07-26 16:41:12.792184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:53.240 [2024-07-26 16:41:12.801229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.240 [2024-07-26 16:41:12.801760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.240 [2024-07-26 16:41:12.801802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.240 [2024-07-26 16:41:12.801828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.240 [2024-07-26 16:41:12.802133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.240 [2024-07-26 16:41:12.802426] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.240 [2024-07-26 16:41:12.802458] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.240 [2024-07-26 16:41:12.802480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.240 [2024-07-26 16:41:12.806656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:53.240 [2024-07-26 16:41:12.815723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.240 [2024-07-26 16:41:12.816211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.240 [2024-07-26 16:41:12.816247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.240 [2024-07-26 16:41:12.816274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.240 [2024-07-26 16:41:12.816565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.240 [2024-07-26 16:41:12.816854] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.240 [2024-07-26 16:41:12.816886] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.240 [2024-07-26 16:41:12.816908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.240 [2024-07-26 16:41:12.821051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:53.240 [2024-07-26 16:41:12.830321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.240 [2024-07-26 16:41:12.830841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.240 [2024-07-26 16:41:12.830882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.240 [2024-07-26 16:41:12.830908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.240 [2024-07-26 16:41:12.831208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.240 [2024-07-26 16:41:12.831500] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.240 [2024-07-26 16:41:12.831532] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.240 [2024-07-26 16:41:12.831555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.240 [2024-07-26 16:41:12.835702] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:53.240 [2024-07-26 16:41:12.844959] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.240 [2024-07-26 16:41:12.845460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.240 [2024-07-26 16:41:12.845501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.240 [2024-07-26 16:41:12.845528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.240 [2024-07-26 16:41:12.845817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.240 [2024-07-26 16:41:12.846121] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.240 [2024-07-26 16:41:12.846153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.240 [2024-07-26 16:41:12.846176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.240 [2024-07-26 16:41:12.850325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:53.240 [2024-07-26 16:41:12.859566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.240 [2024-07-26 16:41:12.860089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.240 [2024-07-26 16:41:12.860126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.240 [2024-07-26 16:41:12.860149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.240 [2024-07-26 16:41:12.860452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.240 [2024-07-26 16:41:12.860742] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.240 [2024-07-26 16:41:12.860780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.240 [2024-07-26 16:41:12.860803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.240 [2024-07-26 16:41:12.864958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:53.240 [2024-07-26 16:41:12.874245] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.240 [2024-07-26 16:41:12.874755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.240 [2024-07-26 16:41:12.874797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.240 [2024-07-26 16:41:12.874823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.240 [2024-07-26 16:41:12.875125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.240 [2024-07-26 16:41:12.875415] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.240 [2024-07-26 16:41:12.875448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.240 [2024-07-26 16:41:12.875470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.240 [2024-07-26 16:41:12.879622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:53.240 [2024-07-26 16:41:12.888693] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.240 [2024-07-26 16:41:12.889180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.240 [2024-07-26 16:41:12.889222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.240 [2024-07-26 16:41:12.889249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.240 [2024-07-26 16:41:12.889539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.240 [2024-07-26 16:41:12.889830] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.240 [2024-07-26 16:41:12.889862] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.240 [2024-07-26 16:41:12.889885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.240 [2024-07-26 16:41:12.894039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:53.240 [2024-07-26 16:41:12.903325] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.240 [2024-07-26 16:41:12.903858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.241 [2024-07-26 16:41:12.903907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.241 [2024-07-26 16:41:12.903934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.241 [2024-07-26 16:41:12.904235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.241 [2024-07-26 16:41:12.904531] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.241 [2024-07-26 16:41:12.904563] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.241 [2024-07-26 16:41:12.904593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.241 [2024-07-26 16:41:12.908746] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:53.241 [2024-07-26 16:41:12.917768] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.241 [2024-07-26 16:41:12.918282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.241 [2024-07-26 16:41:12.918335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.241 [2024-07-26 16:41:12.918362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.241 [2024-07-26 16:41:12.918650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.241 [2024-07-26 16:41:12.918941] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.241 [2024-07-26 16:41:12.918972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.241 [2024-07-26 16:41:12.918995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.241 [2024-07-26 16:41:12.923202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:53.241 [2024-07-26 16:41:12.932221] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.241 [2024-07-26 16:41:12.932758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.241 [2024-07-26 16:41:12.932811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.241 [2024-07-26 16:41:12.932838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.241 [2024-07-26 16:41:12.933141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.241 [2024-07-26 16:41:12.933431] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.241 [2024-07-26 16:41:12.933462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.241 [2024-07-26 16:41:12.933485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.241 [2024-07-26 16:41:12.937642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:53.241 [2024-07-26 16:41:12.946874] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.241 [2024-07-26 16:41:12.947431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.241 [2024-07-26 16:41:12.947483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.241 [2024-07-26 16:41:12.947509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.241 [2024-07-26 16:41:12.947797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.241 [2024-07-26 16:41:12.948100] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.241 [2024-07-26 16:41:12.948132] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.241 [2024-07-26 16:41:12.948160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.241 [2024-07-26 16:41:12.952309] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:53.241 [2024-07-26 16:41:12.961331] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.241 [2024-07-26 16:41:12.961835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.241 [2024-07-26 16:41:12.961884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.241 [2024-07-26 16:41:12.961916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.241 [2024-07-26 16:41:12.962214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.241 [2024-07-26 16:41:12.962504] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.241 [2024-07-26 16:41:12.962536] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.241 [2024-07-26 16:41:12.962559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.241 [2024-07-26 16:41:12.966712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:53.241 [2024-07-26 16:41:12.975939] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.241 [2024-07-26 16:41:12.976492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.241 [2024-07-26 16:41:12.976544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.241 [2024-07-26 16:41:12.976586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.241 [2024-07-26 16:41:12.976875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.241 [2024-07-26 16:41:12.977179] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.241 [2024-07-26 16:41:12.977211] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.241 [2024-07-26 16:41:12.977234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.241 [2024-07-26 16:41:12.981382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:53.241 [2024-07-26 16:41:12.990385] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.241 [2024-07-26 16:41:12.990912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.241 [2024-07-26 16:41:12.990960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.241 [2024-07-26 16:41:12.990986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.241 [2024-07-26 16:41:12.991285] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.241 [2024-07-26 16:41:12.991575] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.241 [2024-07-26 16:41:12.991607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.241 [2024-07-26 16:41:12.991630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.500 [2024-07-26 16:41:12.995778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:53.500 [2024-07-26 16:41:13.005009] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.500 [2024-07-26 16:41:13.005522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.500 [2024-07-26 16:41:13.005575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.500 [2024-07-26 16:41:13.005602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.500 [2024-07-26 16:41:13.005896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.500 [2024-07-26 16:41:13.006197] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.500 [2024-07-26 16:41:13.006235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.500 [2024-07-26 16:41:13.006263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.500 [2024-07-26 16:41:13.010387] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:53.500 [2024-07-26 16:41:13.019598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.500 [2024-07-26 16:41:13.020201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.500 [2024-07-26 16:41:13.020249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.500 [2024-07-26 16:41:13.020275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.500 [2024-07-26 16:41:13.020563] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.500 [2024-07-26 16:41:13.020852] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.500 [2024-07-26 16:41:13.020883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.500 [2024-07-26 16:41:13.020906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.500 [2024-07-26 16:41:13.025051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:53.500 [2024-07-26 16:41:13.034029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.500 [2024-07-26 16:41:13.034646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.500 [2024-07-26 16:41:13.034716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.500 [2024-07-26 16:41:13.034743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.500 [2024-07-26 16:41:13.035030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.500 [2024-07-26 16:41:13.035349] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.500 [2024-07-26 16:41:13.035381] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.500 [2024-07-26 16:41:13.035404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.500 [2024-07-26 16:41:13.039550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:53.500 [2024-07-26 16:41:13.048519] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.500 [2024-07-26 16:41:13.049123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.500 [2024-07-26 16:41:13.049189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.500 [2024-07-26 16:41:13.049215] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.500 [2024-07-26 16:41:13.049502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.500 [2024-07-26 16:41:13.049790] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.500 [2024-07-26 16:41:13.049822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.500 [2024-07-26 16:41:13.049844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.500 [2024-07-26 16:41:13.053975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:53.500 [2024-07-26 16:41:13.062954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.500 [2024-07-26 16:41:13.063534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.500 [2024-07-26 16:41:13.063581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.500 [2024-07-26 16:41:13.063607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.500 [2024-07-26 16:41:13.063893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.500 [2024-07-26 16:41:13.064194] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.500 [2024-07-26 16:41:13.064227] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.500 [2024-07-26 16:41:13.064251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.500 [2024-07-26 16:41:13.068389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:53.500 [2024-07-26 16:41:13.077601] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.500 [2024-07-26 16:41:13.078101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.500 [2024-07-26 16:41:13.078152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.500 [2024-07-26 16:41:13.078178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.500 [2024-07-26 16:41:13.078466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.500 [2024-07-26 16:41:13.078756] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.500 [2024-07-26 16:41:13.078787] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.500 [2024-07-26 16:41:13.078810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.500 [2024-07-26 16:41:13.082953] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:53.500 [2024-07-26 16:41:13.092195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.500 [2024-07-26 16:41:13.092713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.500 [2024-07-26 16:41:13.092761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.500 [2024-07-26 16:41:13.092787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.500 [2024-07-26 16:41:13.093086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.500 [2024-07-26 16:41:13.093376] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.500 [2024-07-26 16:41:13.093408] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.500 [2024-07-26 16:41:13.093431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.500 [2024-07-26 16:41:13.097570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:53.500 [2024-07-26 16:41:13.106778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.500 [2024-07-26 16:41:13.107313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.500 [2024-07-26 16:41:13.107355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.500 [2024-07-26 16:41:13.107390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.500 [2024-07-26 16:41:13.107680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.500 [2024-07-26 16:41:13.107971] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.500 [2024-07-26 16:41:13.108002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.500 [2024-07-26 16:41:13.108024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.501 [2024-07-26 16:41:13.112176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:53.501 [2024-07-26 16:41:13.121420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.501 [2024-07-26 16:41:13.121930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.501 [2024-07-26 16:41:13.121979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.501 [2024-07-26 16:41:13.122005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.501 [2024-07-26 16:41:13.122303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.501 [2024-07-26 16:41:13.122593] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.501 [2024-07-26 16:41:13.122625] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.501 [2024-07-26 16:41:13.122647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.501 [2024-07-26 16:41:13.126761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:53.501 [2024-07-26 16:41:13.135964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.501 [2024-07-26 16:41:13.136468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.501 [2024-07-26 16:41:13.136510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.501 [2024-07-26 16:41:13.136536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.501 [2024-07-26 16:41:13.136822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.501 [2024-07-26 16:41:13.137125] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.501 [2024-07-26 16:41:13.137157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.501 [2024-07-26 16:41:13.137180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.501 [2024-07-26 16:41:13.141292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:53.501 [2024-07-26 16:41:13.150472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.501 [2024-07-26 16:41:13.150974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.501 [2024-07-26 16:41:13.151014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.501 [2024-07-26 16:41:13.151040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.501 [2024-07-26 16:41:13.151337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.501 [2024-07-26 16:41:13.151631] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.501 [2024-07-26 16:41:13.151663] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.501 [2024-07-26 16:41:13.151686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.501 [2024-07-26 16:41:13.155806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:53.501 [2024-07-26 16:41:13.165022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.501 [2024-07-26 16:41:13.165513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.501 [2024-07-26 16:41:13.165555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.501 [2024-07-26 16:41:13.165582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.501 [2024-07-26 16:41:13.165869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.501 [2024-07-26 16:41:13.166173] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.501 [2024-07-26 16:41:13.166206] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.501 [2024-07-26 16:41:13.166229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.501 [2024-07-26 16:41:13.170356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:53.501 [2024-07-26 16:41:13.179591] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.501 [2024-07-26 16:41:13.180116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.501 [2024-07-26 16:41:13.180158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.501 [2024-07-26 16:41:13.180184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.501 [2024-07-26 16:41:13.180471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.501 [2024-07-26 16:41:13.180774] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.501 [2024-07-26 16:41:13.180807] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.501 [2024-07-26 16:41:13.180830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.501 [2024-07-26 16:41:13.184997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:53.501 [2024-07-26 16:41:13.194226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.501 [2024-07-26 16:41:13.194712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.501 [2024-07-26 16:41:13.194755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.501 [2024-07-26 16:41:13.194781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.501 [2024-07-26 16:41:13.195087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.501 [2024-07-26 16:41:13.195378] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.501 [2024-07-26 16:41:13.195411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.501 [2024-07-26 16:41:13.195434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.501 [2024-07-26 16:41:13.199565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:53.501 [2024-07-26 16:41:13.208757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.501 [2024-07-26 16:41:13.209283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.501 [2024-07-26 16:41:13.209325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.501 [2024-07-26 16:41:13.209352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.501 [2024-07-26 16:41:13.209639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.501 [2024-07-26 16:41:13.209930] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.501 [2024-07-26 16:41:13.209962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.501 [2024-07-26 16:41:13.209985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.501 [2024-07-26 16:41:13.214127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:53.501 [2024-07-26 16:41:13.223329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.501 [2024-07-26 16:41:13.223827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.501 [2024-07-26 16:41:13.223869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.501 [2024-07-26 16:41:13.223895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.501 [2024-07-26 16:41:13.224197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.501 [2024-07-26 16:41:13.224497] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.501 [2024-07-26 16:41:13.224530] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.501 [2024-07-26 16:41:13.224553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.501 [2024-07-26 16:41:13.228779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:53.501 [2024-07-26 16:41:13.237743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.501 [2024-07-26 16:41:13.238243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.501 [2024-07-26 16:41:13.238286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.501 [2024-07-26 16:41:13.238313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.501 [2024-07-26 16:41:13.238604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.501 [2024-07-26 16:41:13.238895] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.501 [2024-07-26 16:41:13.238928] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.501 [2024-07-26 16:41:13.238950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.501 [2024-07-26 16:41:13.243084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:53.501 [2024-07-26 16:41:13.252296] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.501 [2024-07-26 16:41:13.252785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.501 [2024-07-26 16:41:13.252828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.501 [2024-07-26 16:41:13.252864] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.501 [2024-07-26 16:41:13.253165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.501 [2024-07-26 16:41:13.253455] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.501 [2024-07-26 16:41:13.253488] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.502 [2024-07-26 16:41:13.253510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.502 [2024-07-26 16:41:13.257629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:53.758 [2024-07-26 16:41:13.266827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.758 [2024-07-26 16:41:13.267316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.758 [2024-07-26 16:41:13.267357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.758 [2024-07-26 16:41:13.267383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.758 [2024-07-26 16:41:13.267670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.758 [2024-07-26 16:41:13.267960] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.758 [2024-07-26 16:41:13.267994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.758 [2024-07-26 16:41:13.268017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.758 [2024-07-26 16:41:13.272139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:53.758 [2024-07-26 16:41:13.281320] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.758 [2024-07-26 16:41:13.281794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.758 [2024-07-26 16:41:13.281836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.758 [2024-07-26 16:41:13.281862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.758 [2024-07-26 16:41:13.282162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.758 [2024-07-26 16:41:13.282449] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.758 [2024-07-26 16:41:13.282483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.758 [2024-07-26 16:41:13.282506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.758 [2024-07-26 16:41:13.286665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:53.758 [2024-07-26 16:41:13.295901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.758 [2024-07-26 16:41:13.296436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.758 [2024-07-26 16:41:13.296478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.758 [2024-07-26 16:41:13.296504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.758 [2024-07-26 16:41:13.296791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.758 [2024-07-26 16:41:13.297100] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.758 [2024-07-26 16:41:13.297134] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.758 [2024-07-26 16:41:13.297157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.758 [2024-07-26 16:41:13.301287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:53.758 [2024-07-26 16:41:13.310491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.758 [2024-07-26 16:41:13.310997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.758 [2024-07-26 16:41:13.311038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.758 [2024-07-26 16:41:13.311075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.758 [2024-07-26 16:41:13.311366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.758 [2024-07-26 16:41:13.311654] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.758 [2024-07-26 16:41:13.311687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.758 [2024-07-26 16:41:13.311710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.758 [2024-07-26 16:41:13.315841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:53.758 [2024-07-26 16:41:13.325089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.758 [2024-07-26 16:41:13.325619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.758 [2024-07-26 16:41:13.325662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.758 [2024-07-26 16:41:13.325688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.759 [2024-07-26 16:41:13.325977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.759 [2024-07-26 16:41:13.326283] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.759 [2024-07-26 16:41:13.326316] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.759 [2024-07-26 16:41:13.326339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.759 [2024-07-26 16:41:13.330459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:53.759 [2024-07-26 16:41:13.339659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.759 [2024-07-26 16:41:13.340159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.759 [2024-07-26 16:41:13.340202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.759 [2024-07-26 16:41:13.340229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.759 [2024-07-26 16:41:13.340516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.759 [2024-07-26 16:41:13.340807] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.759 [2024-07-26 16:41:13.340838] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.759 [2024-07-26 16:41:13.340861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.759 [2024-07-26 16:41:13.345004] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:53.759 [2024-07-26 16:41:13.354215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.759 [2024-07-26 16:41:13.354703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.759 [2024-07-26 16:41:13.354745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.759 [2024-07-26 16:41:13.354772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.759 [2024-07-26 16:41:13.355072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.759 [2024-07-26 16:41:13.355364] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.759 [2024-07-26 16:41:13.355396] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.759 [2024-07-26 16:41:13.355419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.759 [2024-07-26 16:41:13.359551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:53.759 [2024-07-26 16:41:13.368762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.759 [2024-07-26 16:41:13.369284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.759 [2024-07-26 16:41:13.369326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.759 [2024-07-26 16:41:13.369352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.759 [2024-07-26 16:41:13.369640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.759 [2024-07-26 16:41:13.369929] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.759 [2024-07-26 16:41:13.369961] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.759 [2024-07-26 16:41:13.369983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.759 [2024-07-26 16:41:13.374125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:53.759 [2024-07-26 16:41:13.383328] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.759 [2024-07-26 16:41:13.383834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.759 [2024-07-26 16:41:13.383876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.759 [2024-07-26 16:41:13.383902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.759 [2024-07-26 16:41:13.384204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.759 [2024-07-26 16:41:13.384492] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.759 [2024-07-26 16:41:13.384525] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.759 [2024-07-26 16:41:13.384562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.759 [2024-07-26 16:41:13.388711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:53.759 [2024-07-26 16:41:13.397916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.759 [2024-07-26 16:41:13.398435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.759 [2024-07-26 16:41:13.398482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.759 [2024-07-26 16:41:13.398509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.759 [2024-07-26 16:41:13.398797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.759 [2024-07-26 16:41:13.399101] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.759 [2024-07-26 16:41:13.399135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.759 [2024-07-26 16:41:13.399157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.759 [2024-07-26 16:41:13.403286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:53.759 [2024-07-26 16:41:13.412503] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.759 [2024-07-26 16:41:13.413027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.759 [2024-07-26 16:41:13.413078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.759 [2024-07-26 16:41:13.413106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.759 [2024-07-26 16:41:13.413394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.759 [2024-07-26 16:41:13.413684] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.759 [2024-07-26 16:41:13.413716] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.759 [2024-07-26 16:41:13.413738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.759 [2024-07-26 16:41:13.417885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:53.759 [2024-07-26 16:41:13.427128] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.759 [2024-07-26 16:41:13.427645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.759 [2024-07-26 16:41:13.427686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.759 [2024-07-26 16:41:13.427713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.759 [2024-07-26 16:41:13.428001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.759 [2024-07-26 16:41:13.428305] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.759 [2024-07-26 16:41:13.428338] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.759 [2024-07-26 16:41:13.428361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.759 [2024-07-26 16:41:13.432497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:53.759 [2024-07-26 16:41:13.441711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.759 [2024-07-26 16:41:13.442220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.759 [2024-07-26 16:41:13.442262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.759 [2024-07-26 16:41:13.442289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.759 [2024-07-26 16:41:13.442577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.759 [2024-07-26 16:41:13.442874] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.759 [2024-07-26 16:41:13.442906] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.759 [2024-07-26 16:41:13.442930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.759 [2024-07-26 16:41:13.447068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:53.759 [2024-07-26 16:41:13.456288] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.759 [2024-07-26 16:41:13.456791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.759 [2024-07-26 16:41:13.456833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.759 [2024-07-26 16:41:13.456859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.759 [2024-07-26 16:41:13.457160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.759 [2024-07-26 16:41:13.457450] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.759 [2024-07-26 16:41:13.457483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.759 [2024-07-26 16:41:13.457506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.759 [2024-07-26 16:41:13.461645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:53.759 [2024-07-26 16:41:13.470875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.759 [2024-07-26 16:41:13.471373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.759 [2024-07-26 16:41:13.471415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.759 [2024-07-26 16:41:13.471441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.759 [2024-07-26 16:41:13.471728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.759 [2024-07-26 16:41:13.472017] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.759 [2024-07-26 16:41:13.472073] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.759 [2024-07-26 16:41:13.472100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.759 [2024-07-26 16:41:13.476254] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:53.759 [2024-07-26 16:41:13.485499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.759 [2024-07-26 16:41:13.486023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.759 [2024-07-26 16:41:13.486073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.759 [2024-07-26 16:41:13.486103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.759 [2024-07-26 16:41:13.486392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.759 [2024-07-26 16:41:13.486698] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.759 [2024-07-26 16:41:13.486738] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.759 [2024-07-26 16:41:13.486760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.759 [2024-07-26 16:41:13.490899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:53.759 [2024-07-26 16:41:13.500160] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.759 [2024-07-26 16:41:13.500662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.759 [2024-07-26 16:41:13.500704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.759 [2024-07-26 16:41:13.500730] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.759 [2024-07-26 16:41:13.501018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.759 [2024-07-26 16:41:13.501322] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.759 [2024-07-26 16:41:13.501354] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.759 [2024-07-26 16:41:13.501376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.759 [2024-07-26 16:41:13.505503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:53.759 [2024-07-26 16:41:13.514707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:53.759 [2024-07-26 16:41:13.515219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.759 [2024-07-26 16:41:13.515260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:53.759 [2024-07-26 16:41:13.515286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:53.759 [2024-07-26 16:41:13.515574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:53.759 [2024-07-26 16:41:13.515862] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:53.759 [2024-07-26 16:41:13.515894] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:53.759 [2024-07-26 16:41:13.515916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:53.759 [2024-07-26 16:41:13.520056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:54.017 [2024-07-26 16:41:13.529293] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.017 [2024-07-26 16:41:13.529782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.017 [2024-07-26 16:41:13.529823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.017 [2024-07-26 16:41:13.529849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.017 [2024-07-26 16:41:13.530152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.017 [2024-07-26 16:41:13.530441] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.017 [2024-07-26 16:41:13.530473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.017 [2024-07-26 16:41:13.530496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.017 [2024-07-26 16:41:13.534626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:54.017 [2024-07-26 16:41:13.543839] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.017 [2024-07-26 16:41:13.544358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.017 [2024-07-26 16:41:13.544407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.017 [2024-07-26 16:41:13.544434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.017 [2024-07-26 16:41:13.544722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.017 [2024-07-26 16:41:13.545012] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.017 [2024-07-26 16:41:13.545044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.017 [2024-07-26 16:41:13.545079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.017 [2024-07-26 16:41:13.549218] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:54.017 [2024-07-26 16:41:13.558474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.017 [2024-07-26 16:41:13.558987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.017 [2024-07-26 16:41:13.559028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.017 [2024-07-26 16:41:13.559054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.017 [2024-07-26 16:41:13.559354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.017 [2024-07-26 16:41:13.559644] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.017 [2024-07-26 16:41:13.559676] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.017 [2024-07-26 16:41:13.559698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.017 [2024-07-26 16:41:13.563839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:54.017 [2024-07-26 16:41:13.573079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.017 [2024-07-26 16:41:13.573579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.017 [2024-07-26 16:41:13.573621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.017 [2024-07-26 16:41:13.573647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.017 [2024-07-26 16:41:13.573934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.017 [2024-07-26 16:41:13.574238] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.017 [2024-07-26 16:41:13.574270] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.017 [2024-07-26 16:41:13.574292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.017 [2024-07-26 16:41:13.578434] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:54.017 [2024-07-26 16:41:13.587690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.017 [2024-07-26 16:41:13.588195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.017 [2024-07-26 16:41:13.588237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.017 [2024-07-26 16:41:13.588263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.017 [2024-07-26 16:41:13.588550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.017 [2024-07-26 16:41:13.588844] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.017 [2024-07-26 16:41:13.588877] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.017 [2024-07-26 16:41:13.588900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.017 [2024-07-26 16:41:13.593028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:54.017 [2024-07-26 16:41:13.602306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.017 [2024-07-26 16:41:13.602926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.017 [2024-07-26 16:41:13.602995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.017 [2024-07-26 16:41:13.603022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.017 [2024-07-26 16:41:13.603322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.017 [2024-07-26 16:41:13.603612] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.017 [2024-07-26 16:41:13.603655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.017 [2024-07-26 16:41:13.603678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.017 [2024-07-26 16:41:13.607827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:54.017 [2024-07-26 16:41:13.616809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.017 [2024-07-26 16:41:13.617305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.017 [2024-07-26 16:41:13.617346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.017 [2024-07-26 16:41:13.617372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.017 [2024-07-26 16:41:13.617659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.017 [2024-07-26 16:41:13.617949] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.017 [2024-07-26 16:41:13.617982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.017 [2024-07-26 16:41:13.618005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.017 [2024-07-26 16:41:13.622158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:54.017 [2024-07-26 16:41:13.631392] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.017 [2024-07-26 16:41:13.632021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.018 [2024-07-26 16:41:13.632090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.018 [2024-07-26 16:41:13.632118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.018 [2024-07-26 16:41:13.632405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.018 [2024-07-26 16:41:13.632694] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.018 [2024-07-26 16:41:13.632727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.018 [2024-07-26 16:41:13.632755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.018 [2024-07-26 16:41:13.636903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:54.018 [2024-07-26 16:41:13.645894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.018 [2024-07-26 16:41:13.646428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.018 [2024-07-26 16:41:13.646469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.018 [2024-07-26 16:41:13.646496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.018 [2024-07-26 16:41:13.646784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.018 [2024-07-26 16:41:13.647085] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.018 [2024-07-26 16:41:13.647117] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.018 [2024-07-26 16:41:13.647139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.018 [2024-07-26 16:41:13.651282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:54.018 [2024-07-26 16:41:13.660475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.018 [2024-07-26 16:41:13.660978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.018 [2024-07-26 16:41:13.661019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.018 [2024-07-26 16:41:13.661045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.018 [2024-07-26 16:41:13.661347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.018 [2024-07-26 16:41:13.661636] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.018 [2024-07-26 16:41:13.661669] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.018 [2024-07-26 16:41:13.661692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.018 [2024-07-26 16:41:13.665817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:54.018 [2024-07-26 16:41:13.675032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.018 [2024-07-26 16:41:13.675686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.018 [2024-07-26 16:41:13.675744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.018 [2024-07-26 16:41:13.675771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.018 [2024-07-26 16:41:13.676070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.018 [2024-07-26 16:41:13.676362] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.018 [2024-07-26 16:41:13.676395] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.018 [2024-07-26 16:41:13.676417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.018 [2024-07-26 16:41:13.680551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:54.018 [2024-07-26 16:41:13.689545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.018 [2024-07-26 16:41:13.690056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.018 [2024-07-26 16:41:13.690113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.018 [2024-07-26 16:41:13.690140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.018 [2024-07-26 16:41:13.690429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.018 [2024-07-26 16:41:13.690718] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.018 [2024-07-26 16:41:13.690750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.018 [2024-07-26 16:41:13.690772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.018 [2024-07-26 16:41:13.694911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:54.018 [2024-07-26 16:41:13.704145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.018 [2024-07-26 16:41:13.704651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.018 [2024-07-26 16:41:13.704692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.018 [2024-07-26 16:41:13.704718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.018 [2024-07-26 16:41:13.705005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.018 [2024-07-26 16:41:13.705308] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.018 [2024-07-26 16:41:13.705340] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.018 [2024-07-26 16:41:13.705362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.018 [2024-07-26 16:41:13.709491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:54.018 [2024-07-26 16:41:13.718698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.018 [2024-07-26 16:41:13.719211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.018 [2024-07-26 16:41:13.719253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.018 [2024-07-26 16:41:13.719279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.018 [2024-07-26 16:41:13.719566] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.018 [2024-07-26 16:41:13.719857] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.018 [2024-07-26 16:41:13.719891] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.018 [2024-07-26 16:41:13.719914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.018 [2024-07-26 16:41:13.724048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:54.018 [2024-07-26 16:41:13.733292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.018 [2024-07-26 16:41:13.733803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.018 [2024-07-26 16:41:13.733845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.018 [2024-07-26 16:41:13.733871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.018 [2024-07-26 16:41:13.734173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.018 [2024-07-26 16:41:13.734467] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.018 [2024-07-26 16:41:13.734499] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.018 [2024-07-26 16:41:13.734522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.018 [2024-07-26 16:41:13.738645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:54.018 [2024-07-26 16:41:13.747878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.018 [2024-07-26 16:41:13.748403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.018 [2024-07-26 16:41:13.748444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.018 [2024-07-26 16:41:13.748470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.018 [2024-07-26 16:41:13.748759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.018 [2024-07-26 16:41:13.749047] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.018 [2024-07-26 16:41:13.749093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.018 [2024-07-26 16:41:13.749117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.018 [2024-07-26 16:41:13.753252] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:54.018 [2024-07-26 16:41:13.762373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.018 [2024-07-26 16:41:13.762919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.018 [2024-07-26 16:41:13.762961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.018 [2024-07-26 16:41:13.762987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.018 [2024-07-26 16:41:13.763285] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.018 [2024-07-26 16:41:13.763574] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.018 [2024-07-26 16:41:13.763607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.018 [2024-07-26 16:41:13.763630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.018 [2024-07-26 16:41:13.767753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:54.018 [2024-07-26 16:41:13.776965] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.018 [2024-07-26 16:41:13.777458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.019 [2024-07-26 16:41:13.777500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.019 [2024-07-26 16:41:13.777526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.019 [2024-07-26 16:41:13.777812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.019 [2024-07-26 16:41:13.778115] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.019 [2024-07-26 16:41:13.778148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.019 [2024-07-26 16:41:13.778177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.276 [2024-07-26 16:41:13.782313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:54.276 [2024-07-26 16:41:13.791552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.276 [2024-07-26 16:41:13.792077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-07-26 16:41:13.792126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.276 [2024-07-26 16:41:13.792151] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.276 [2024-07-26 16:41:13.792439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.276 [2024-07-26 16:41:13.792729] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.276 [2024-07-26 16:41:13.792762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.276 [2024-07-26 16:41:13.792786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.276 [2024-07-26 16:41:13.796916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:54.276 [2024-07-26 16:41:13.806139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.276 [2024-07-26 16:41:13.806746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-07-26 16:41:13.806846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.276 [2024-07-26 16:41:13.806873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.276 [2024-07-26 16:41:13.807175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.276 [2024-07-26 16:41:13.807464] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.276 [2024-07-26 16:41:13.807496] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.277 [2024-07-26 16:41:13.807519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.277 [2024-07-26 16:41:13.811665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:54.277 [2024-07-26 16:41:13.820653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.277 [2024-07-26 16:41:13.821156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-26 16:41:13.821199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.277 [2024-07-26 16:41:13.821225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.277 [2024-07-26 16:41:13.821513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.277 [2024-07-26 16:41:13.821800] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.277 [2024-07-26 16:41:13.821833] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.277 [2024-07-26 16:41:13.821856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.277 [2024-07-26 16:41:13.825998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:54.277 [2024-07-26 16:41:13.835228] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.277 [2024-07-26 16:41:13.835777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-26 16:41:13.835819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.277 [2024-07-26 16:41:13.835845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.277 [2024-07-26 16:41:13.836150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.277 [2024-07-26 16:41:13.836447] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.277 [2024-07-26 16:41:13.836479] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.277 [2024-07-26 16:41:13.836502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.277 [2024-07-26 16:41:13.840631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:54.277 [2024-07-26 16:41:13.849843] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.277 [2024-07-26 16:41:13.850365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-26 16:41:13.850407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.277 [2024-07-26 16:41:13.850433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.277 [2024-07-26 16:41:13.850719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.277 [2024-07-26 16:41:13.851009] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.277 [2024-07-26 16:41:13.851041] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.277 [2024-07-26 16:41:13.851078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.277 [2024-07-26 16:41:13.855220] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:54.277 [2024-07-26 16:41:13.864447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.277 [2024-07-26 16:41:13.864955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-26 16:41:13.864996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.277 [2024-07-26 16:41:13.865022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.277 [2024-07-26 16:41:13.865318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.277 [2024-07-26 16:41:13.865608] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.277 [2024-07-26 16:41:13.865640] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.277 [2024-07-26 16:41:13.865664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.277 [2024-07-26 16:41:13.869801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:54.277 [2024-07-26 16:41:13.879026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.277 [2024-07-26 16:41:13.879548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-26 16:41:13.879589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.277 [2024-07-26 16:41:13.879615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.277 [2024-07-26 16:41:13.879908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.277 [2024-07-26 16:41:13.880214] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.277 [2024-07-26 16:41:13.880247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.277 [2024-07-26 16:41:13.880270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.277 [2024-07-26 16:41:13.884392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:54.277 [2024-07-26 16:41:13.893625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.277 [2024-07-26 16:41:13.894130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-26 16:41:13.894171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.277 [2024-07-26 16:41:13.894198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.277 [2024-07-26 16:41:13.894484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.277 [2024-07-26 16:41:13.894774] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.277 [2024-07-26 16:41:13.894806] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.277 [2024-07-26 16:41:13.894828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.277 [2024-07-26 16:41:13.898955] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:54.277 [2024-07-26 16:41:13.908202] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.277 [2024-07-26 16:41:13.908723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-26 16:41:13.908765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.277 [2024-07-26 16:41:13.908790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.277 [2024-07-26 16:41:13.909090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.277 [2024-07-26 16:41:13.909378] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.277 [2024-07-26 16:41:13.909412] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.277 [2024-07-26 16:41:13.909435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.277 [2024-07-26 16:41:13.913574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:54.277 [2024-07-26 16:41:13.922799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.277 [2024-07-26 16:41:13.923319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-26 16:41:13.923359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.277 [2024-07-26 16:41:13.923385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.277 [2024-07-26 16:41:13.923671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.277 [2024-07-26 16:41:13.923962] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.277 [2024-07-26 16:41:13.923994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.277 [2024-07-26 16:41:13.924023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.277 [2024-07-26 16:41:13.928171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:54.277 [2024-07-26 16:41:13.937383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.277 [2024-07-26 16:41:13.937883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-26 16:41:13.937924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.277 [2024-07-26 16:41:13.937950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.277 [2024-07-26 16:41:13.938254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.277 [2024-07-26 16:41:13.938544] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.277 [2024-07-26 16:41:13.938577] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.277 [2024-07-26 16:41:13.938600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.277 [2024-07-26 16:41:13.942730] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:54.277 [2024-07-26 16:41:13.951962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.277 [2024-07-26 16:41:13.952458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-26 16:41:13.952500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.277 [2024-07-26 16:41:13.952527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.277 [2024-07-26 16:41:13.952813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.278 [2024-07-26 16:41:13.953116] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.278 [2024-07-26 16:41:13.953150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.278 [2024-07-26 16:41:13.953173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.278 [2024-07-26 16:41:13.957289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:54.278 [2024-07-26 16:41:13.966491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.278 [2024-07-26 16:41:13.967075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-07-26 16:41:13.967117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.278 [2024-07-26 16:41:13.967143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.278 [2024-07-26 16:41:13.967430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.278 [2024-07-26 16:41:13.967719] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.278 [2024-07-26 16:41:13.967752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.278 [2024-07-26 16:41:13.967775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.278 [2024-07-26 16:41:13.971904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:54.278 [2024-07-26 16:41:13.981121] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.278 [2024-07-26 16:41:13.981597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-07-26 16:41:13.981638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.278 [2024-07-26 16:41:13.981664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.278 [2024-07-26 16:41:13.981951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.278 [2024-07-26 16:41:13.982257] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.278 [2024-07-26 16:41:13.982290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.278 [2024-07-26 16:41:13.982313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.278 [2024-07-26 16:41:13.986449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:54.278 [2024-07-26 16:41:13.995676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.278 [2024-07-26 16:41:13.996202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-07-26 16:41:13.996244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.278 [2024-07-26 16:41:13.996270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.278 [2024-07-26 16:41:13.996558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.278 [2024-07-26 16:41:13.996845] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.278 [2024-07-26 16:41:13.996877] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.278 [2024-07-26 16:41:13.996899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.278 [2024-07-26 16:41:14.001028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:54.278 [2024-07-26 16:41:14.010282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.278 [2024-07-26 16:41:14.010793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-07-26 16:41:14.010834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.278 [2024-07-26 16:41:14.010860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.278 [2024-07-26 16:41:14.011177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.278 [2024-07-26 16:41:14.011466] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.278 [2024-07-26 16:41:14.011498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.278 [2024-07-26 16:41:14.011520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.278 [2024-07-26 16:41:14.015655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:54.278 [2024-07-26 16:41:14.024878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.278 [2024-07-26 16:41:14.025403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-07-26 16:41:14.025446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.278 [2024-07-26 16:41:14.025473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.278 [2024-07-26 16:41:14.025765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.278 [2024-07-26 16:41:14.026056] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.278 [2024-07-26 16:41:14.026098] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.278 [2024-07-26 16:41:14.026121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.278 [2024-07-26 16:41:14.030240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:54.536 [2024-07-26 16:41:14.039424] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.536 [2024-07-26 16:41:14.040072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.536 [2024-07-26 16:41:14.040114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.536 [2024-07-26 16:41:14.040140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.536 [2024-07-26 16:41:14.040429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.536 [2024-07-26 16:41:14.040717] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.536 [2024-07-26 16:41:14.040750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.536 [2024-07-26 16:41:14.040772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.536 [2024-07-26 16:41:14.044899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:54.536 [2024-07-26 16:41:14.053861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.536 [2024-07-26 16:41:14.054384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.536 [2024-07-26 16:41:14.054426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.537 [2024-07-26 16:41:14.054452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.537 [2024-07-26 16:41:14.054740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.537 [2024-07-26 16:41:14.055029] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.537 [2024-07-26 16:41:14.055070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.537 [2024-07-26 16:41:14.055095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.537 [2024-07-26 16:41:14.059231] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:54.537 [2024-07-26 16:41:14.068462] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.537 [2024-07-26 16:41:14.069122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.537 [2024-07-26 16:41:14.069164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.537 [2024-07-26 16:41:14.069191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.537 [2024-07-26 16:41:14.069479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.537 [2024-07-26 16:41:14.069769] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.537 [2024-07-26 16:41:14.069802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.537 [2024-07-26 16:41:14.069831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.537 [2024-07-26 16:41:14.073967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:54.537 [2024-07-26 16:41:14.082966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.537 [2024-07-26 16:41:14.083487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.537 [2024-07-26 16:41:14.083529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.537 [2024-07-26 16:41:14.083556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.537 [2024-07-26 16:41:14.083842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.537 [2024-07-26 16:41:14.084143] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.537 [2024-07-26 16:41:14.084175] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.537 [2024-07-26 16:41:14.084198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.537 [2024-07-26 16:41:14.088319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:54.537 [2024-07-26 16:41:14.097543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.537 [2024-07-26 16:41:14.098017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.537 [2024-07-26 16:41:14.098068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.537 [2024-07-26 16:41:14.098097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.537 [2024-07-26 16:41:14.098384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.537 [2024-07-26 16:41:14.098673] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.537 [2024-07-26 16:41:14.098705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.537 [2024-07-26 16:41:14.098728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.537 [2024-07-26 16:41:14.102852] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:54.537 [2024-07-26 16:41:14.112072] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.537 [2024-07-26 16:41:14.112559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.537 [2024-07-26 16:41:14.112600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.537 [2024-07-26 16:41:14.112626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.537 [2024-07-26 16:41:14.112914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.537 [2024-07-26 16:41:14.113216] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.537 [2024-07-26 16:41:14.113248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.537 [2024-07-26 16:41:14.113271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.537 [2024-07-26 16:41:14.117396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:54.537 [2024-07-26 16:41:14.126606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.537 [2024-07-26 16:41:14.127123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.537 [2024-07-26 16:41:14.127164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.537 [2024-07-26 16:41:14.127190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.537 [2024-07-26 16:41:14.127476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.537 [2024-07-26 16:41:14.127766] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.537 [2024-07-26 16:41:14.127798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.537 [2024-07-26 16:41:14.127820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.537 [2024-07-26 16:41:14.131951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:54.537 [2024-07-26 16:41:14.141182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.537 [2024-07-26 16:41:14.141701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.537 [2024-07-26 16:41:14.141743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.537 [2024-07-26 16:41:14.141769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.537 [2024-07-26 16:41:14.142056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.537 [2024-07-26 16:41:14.142367] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.537 [2024-07-26 16:41:14.142399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.537 [2024-07-26 16:41:14.142421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.537 [2024-07-26 16:41:14.146561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:54.537 [2024-07-26 16:41:14.155780] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.537 [2024-07-26 16:41:14.156297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.537 [2024-07-26 16:41:14.156339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.537 [2024-07-26 16:41:14.156365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.537 [2024-07-26 16:41:14.156652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.537 [2024-07-26 16:41:14.156942] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.537 [2024-07-26 16:41:14.156975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.537 [2024-07-26 16:41:14.156998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.537 [2024-07-26 16:41:14.161131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:54.537 [2024-07-26 16:41:14.170335] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.537 [2024-07-26 16:41:14.170846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.537 [2024-07-26 16:41:14.170887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.537 [2024-07-26 16:41:14.170913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.537 [2024-07-26 16:41:14.171217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.537 [2024-07-26 16:41:14.171506] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.537 [2024-07-26 16:41:14.171538] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.537 [2024-07-26 16:41:14.171561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.537 [2024-07-26 16:41:14.175687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:54.537 [2024-07-26 16:41:14.184932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.537 [2024-07-26 16:41:14.185431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.537 [2024-07-26 16:41:14.185473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.537 [2024-07-26 16:41:14.185498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.537 [2024-07-26 16:41:14.185785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.537 [2024-07-26 16:41:14.186084] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.538 [2024-07-26 16:41:14.186120] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.538 [2024-07-26 16:41:14.186142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.538 [2024-07-26 16:41:14.190318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:54.538 [2024-07-26 16:41:14.199525] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.538 [2024-07-26 16:41:14.200025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.538 [2024-07-26 16:41:14.200072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.538 [2024-07-26 16:41:14.200109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.538 [2024-07-26 16:41:14.200396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.538 [2024-07-26 16:41:14.200686] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.538 [2024-07-26 16:41:14.200718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.538 [2024-07-26 16:41:14.200740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.538 [2024-07-26 16:41:14.204891] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:54.538 [2024-07-26 16:41:14.214108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.538 [2024-07-26 16:41:14.214609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.538 [2024-07-26 16:41:14.214650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.538 [2024-07-26 16:41:14.214675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.538 [2024-07-26 16:41:14.214963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.538 [2024-07-26 16:41:14.215262] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.538 [2024-07-26 16:41:14.215305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.538 [2024-07-26 16:41:14.215334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.538 [2024-07-26 16:41:14.219466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:54.538 [2024-07-26 16:41:14.228668] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.538 [2024-07-26 16:41:14.229191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.538 [2024-07-26 16:41:14.229233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.538 [2024-07-26 16:41:14.229259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.538 [2024-07-26 16:41:14.229547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.538 [2024-07-26 16:41:14.229835] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.538 [2024-07-26 16:41:14.229867] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.538 [2024-07-26 16:41:14.229889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.538 [2024-07-26 16:41:14.234016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:54.538 [2024-07-26 16:41:14.243229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.538 [2024-07-26 16:41:14.243731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.538 [2024-07-26 16:41:14.243772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.538 [2024-07-26 16:41:14.243798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.538 [2024-07-26 16:41:14.244097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.538 [2024-07-26 16:41:14.244387] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.538 [2024-07-26 16:41:14.244419] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.538 [2024-07-26 16:41:14.244442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.538 [2024-07-26 16:41:14.248558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:54.538 [2024-07-26 16:41:14.257859] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.538 [2024-07-26 16:41:14.258394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.538 [2024-07-26 16:41:14.258438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.538 [2024-07-26 16:41:14.258465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.538 [2024-07-26 16:41:14.258755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.538 [2024-07-26 16:41:14.259044] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.538 [2024-07-26 16:41:14.259088] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.538 [2024-07-26 16:41:14.259111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.538 [2024-07-26 16:41:14.263245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:54.538 [2024-07-26 16:41:14.272424] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.538 [2024-07-26 16:41:14.272905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.538 [2024-07-26 16:41:14.272946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.538 [2024-07-26 16:41:14.272972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.538 [2024-07-26 16:41:14.273272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.538 [2024-07-26 16:41:14.273560] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.538 [2024-07-26 16:41:14.273592] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.538 [2024-07-26 16:41:14.273615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.538 [2024-07-26 16:41:14.277742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:54.538 [2024-07-26 16:41:14.286941] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.538 [2024-07-26 16:41:14.287459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.538 [2024-07-26 16:41:14.287500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.538 [2024-07-26 16:41:14.287526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.538 [2024-07-26 16:41:14.287812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.538 [2024-07-26 16:41:14.288114] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.538 [2024-07-26 16:41:14.288146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.538 [2024-07-26 16:41:14.288168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.538 [2024-07-26 16:41:14.292314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:54.797 [2024-07-26 16:41:14.301504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.797 [2024-07-26 16:41:14.302046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.797 [2024-07-26 16:41:14.302096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.797 [2024-07-26 16:41:14.302123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.797 [2024-07-26 16:41:14.302411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.797 [2024-07-26 16:41:14.302700] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.797 [2024-07-26 16:41:14.302732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.797 [2024-07-26 16:41:14.302754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.797 [2024-07-26 16:41:14.306868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:54.797 [2024-07-26 16:41:14.316095] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.797 [2024-07-26 16:41:14.316574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.797 [2024-07-26 16:41:14.316616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.797 [2024-07-26 16:41:14.316643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.797 [2024-07-26 16:41:14.316936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.797 [2024-07-26 16:41:14.317240] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.797 [2024-07-26 16:41:14.317273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.797 [2024-07-26 16:41:14.317295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.797 [2024-07-26 16:41:14.321417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:54.797 [2024-07-26 16:41:14.330611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.797 [2024-07-26 16:41:14.331113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.797 [2024-07-26 16:41:14.331155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.797 [2024-07-26 16:41:14.331181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.797 [2024-07-26 16:41:14.331470] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.797 [2024-07-26 16:41:14.331759] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.797 [2024-07-26 16:41:14.331791] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.797 [2024-07-26 16:41:14.331813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.797 [2024-07-26 16:41:14.335951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:54.797 [2024-07-26 16:41:14.345164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.797 [2024-07-26 16:41:14.345678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.797 [2024-07-26 16:41:14.345719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.797 [2024-07-26 16:41:14.345745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.797 [2024-07-26 16:41:14.346034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.797 [2024-07-26 16:41:14.346333] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.797 [2024-07-26 16:41:14.346365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.797 [2024-07-26 16:41:14.346387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.797 [2024-07-26 16:41:14.350516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:54.797 [2024-07-26 16:41:14.359703] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.797 [2024-07-26 16:41:14.360216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.797 [2024-07-26 16:41:14.360257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.797 [2024-07-26 16:41:14.360284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.797 [2024-07-26 16:41:14.360573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.797 [2024-07-26 16:41:14.360862] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.797 [2024-07-26 16:41:14.360899] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.797 [2024-07-26 16:41:14.360922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.797 [2024-07-26 16:41:14.365057] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:54.797 [2024-07-26 16:41:14.374263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.797 [2024-07-26 16:41:14.374785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.797 [2024-07-26 16:41:14.374826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.797 [2024-07-26 16:41:14.374852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.798 [2024-07-26 16:41:14.375154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.798 [2024-07-26 16:41:14.375444] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.798 [2024-07-26 16:41:14.375476] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.798 [2024-07-26 16:41:14.375499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.798 [2024-07-26 16:41:14.379620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:54.798 [2024-07-26 16:41:14.388801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.798 [2024-07-26 16:41:14.389317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.798 [2024-07-26 16:41:14.389358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.798 [2024-07-26 16:41:14.389384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.798 [2024-07-26 16:41:14.389671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.798 [2024-07-26 16:41:14.389975] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.798 [2024-07-26 16:41:14.390007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.798 [2024-07-26 16:41:14.390029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.798 [2024-07-26 16:41:14.394170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:54.798 [2024-07-26 16:41:14.403360] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.798 [2024-07-26 16:41:14.403872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.798 [2024-07-26 16:41:14.403914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.798 [2024-07-26 16:41:14.403940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.798 [2024-07-26 16:41:14.404239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.798 [2024-07-26 16:41:14.404529] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.798 [2024-07-26 16:41:14.404560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.798 [2024-07-26 16:41:14.404583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.798 [2024-07-26 16:41:14.408714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:54.798 [2024-07-26 16:41:14.417931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.798 [2024-07-26 16:41:14.418425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.798 [2024-07-26 16:41:14.418467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.798 [2024-07-26 16:41:14.418493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.798 [2024-07-26 16:41:14.418780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.798 [2024-07-26 16:41:14.419081] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.798 [2024-07-26 16:41:14.419114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.798 [2024-07-26 16:41:14.419136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.798 [2024-07-26 16:41:14.423285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:54.798 [2024-07-26 16:41:14.432499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.798 [2024-07-26 16:41:14.433000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.798 [2024-07-26 16:41:14.433040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.798 [2024-07-26 16:41:14.433077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.798 [2024-07-26 16:41:14.433365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.798 [2024-07-26 16:41:14.433655] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.798 [2024-07-26 16:41:14.433686] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.798 [2024-07-26 16:41:14.433709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.798 [2024-07-26 16:41:14.437833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:54.798 [2024-07-26 16:41:14.447043] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.798 [2024-07-26 16:41:14.447548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.798 [2024-07-26 16:41:14.447590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.798 [2024-07-26 16:41:14.447616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.798 [2024-07-26 16:41:14.447902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.798 [2024-07-26 16:41:14.448204] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.798 [2024-07-26 16:41:14.448237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.798 [2024-07-26 16:41:14.448258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.798 [2024-07-26 16:41:14.452384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:54.798 [2024-07-26 16:41:14.461598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.798 [2024-07-26 16:41:14.462131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.798 [2024-07-26 16:41:14.462173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.798 [2024-07-26 16:41:14.462199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.798 [2024-07-26 16:41:14.462496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.798 [2024-07-26 16:41:14.462786] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.798 [2024-07-26 16:41:14.462818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.798 [2024-07-26 16:41:14.462840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.798 [2024-07-26 16:41:14.466974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:54.798 [2024-07-26 16:41:14.476196] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.798 [2024-07-26 16:41:14.476695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.798 [2024-07-26 16:41:14.476736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.798 [2024-07-26 16:41:14.476762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.798 [2024-07-26 16:41:14.477049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.798 [2024-07-26 16:41:14.477351] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.798 [2024-07-26 16:41:14.477382] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.798 [2024-07-26 16:41:14.477405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.798 [2024-07-26 16:41:14.481534] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:54.798 [2024-07-26 16:41:14.490764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.798 [2024-07-26 16:41:14.491261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.798 [2024-07-26 16:41:14.491302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.798 [2024-07-26 16:41:14.491328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.798 [2024-07-26 16:41:14.491617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.798 [2024-07-26 16:41:14.491906] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.798 [2024-07-26 16:41:14.491938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.798 [2024-07-26 16:41:14.491961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.798 [2024-07-26 16:41:14.496089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:54.798 [2024-07-26 16:41:14.505285] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.798 [2024-07-26 16:41:14.505784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.798 [2024-07-26 16:41:14.505825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.798 [2024-07-26 16:41:14.505850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.798 [2024-07-26 16:41:14.506150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.798 [2024-07-26 16:41:14.506440] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.798 [2024-07-26 16:41:14.506478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.798 [2024-07-26 16:41:14.506503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.798 [2024-07-26 16:41:14.510641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:54.798 [2024-07-26 16:41:14.519844] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.798 [2024-07-26 16:41:14.520354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.798 [2024-07-26 16:41:14.520396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.799 [2024-07-26 16:41:14.520422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.799 [2024-07-26 16:41:14.520709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.799 [2024-07-26 16:41:14.520998] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.799 [2024-07-26 16:41:14.521029] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.799 [2024-07-26 16:41:14.521051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.799 [2024-07-26 16:41:14.525196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:54.799 [2024-07-26 16:41:14.534406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.799 [2024-07-26 16:41:14.534998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.799 [2024-07-26 16:41:14.535039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.799 [2024-07-26 16:41:14.535073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.799 [2024-07-26 16:41:14.535363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.799 [2024-07-26 16:41:14.535652] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.799 [2024-07-26 16:41:14.535684] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.799 [2024-07-26 16:41:14.535706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.799 [2024-07-26 16:41:14.539835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:54.799 [2024-07-26 16:41:14.549036] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.799 [2024-07-26 16:41:14.549547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.799 [2024-07-26 16:41:14.549588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:54.799 [2024-07-26 16:41:14.549614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:54.799 [2024-07-26 16:41:14.549901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:54.799 [2024-07-26 16:41:14.550203] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:54.799 [2024-07-26 16:41:14.550235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:54.799 [2024-07-26 16:41:14.550257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.799 [2024-07-26 16:41:14.554386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:55.057 [2024-07-26 16:41:14.563596] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.057 [2024-07-26 16:41:14.564075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.058 [2024-07-26 16:41:14.564116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.058 [2024-07-26 16:41:14.564142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.058 [2024-07-26 16:41:14.564430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.058 [2024-07-26 16:41:14.564719] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.058 [2024-07-26 16:41:14.564751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.058 [2024-07-26 16:41:14.564774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.058 [2024-07-26 16:41:14.568900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:55.058 [2024-07-26 16:41:14.578110] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.058 [2024-07-26 16:41:14.578619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.058 [2024-07-26 16:41:14.578660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.058 [2024-07-26 16:41:14.578686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.058 [2024-07-26 16:41:14.578972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.058 [2024-07-26 16:41:14.579273] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.058 [2024-07-26 16:41:14.579305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.058 [2024-07-26 16:41:14.579327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.058 [2024-07-26 16:41:14.583448] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:55.058 [2024-07-26 16:41:14.592666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.058 [2024-07-26 16:41:14.593176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.058 [2024-07-26 16:41:14.593217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.058 [2024-07-26 16:41:14.593243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.058 [2024-07-26 16:41:14.593531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.058 [2024-07-26 16:41:14.593819] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.058 [2024-07-26 16:41:14.593851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.058 [2024-07-26 16:41:14.593873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.058 [2024-07-26 16:41:14.598002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:55.058 [2024-07-26 16:41:14.607237] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.058 [2024-07-26 16:41:14.607751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.058 [2024-07-26 16:41:14.607792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.058 [2024-07-26 16:41:14.607824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.058 [2024-07-26 16:41:14.608125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.058 [2024-07-26 16:41:14.608413] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.058 [2024-07-26 16:41:14.608445] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.058 [2024-07-26 16:41:14.608468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.058 [2024-07-26 16:41:14.612593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:55.058 [2024-07-26 16:41:14.621806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.058 [2024-07-26 16:41:14.622320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.058 [2024-07-26 16:41:14.622361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.058 [2024-07-26 16:41:14.622388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.058 [2024-07-26 16:41:14.622673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.058 [2024-07-26 16:41:14.622963] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.058 [2024-07-26 16:41:14.622995] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.058 [2024-07-26 16:41:14.623017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.058 [2024-07-26 16:41:14.627154] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:55.058 [2024-07-26 16:41:14.636420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.058 [2024-07-26 16:41:14.637035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.058 [2024-07-26 16:41:14.637124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.058 [2024-07-26 16:41:14.637151] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.058 [2024-07-26 16:41:14.637437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.058 [2024-07-26 16:41:14.637725] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.058 [2024-07-26 16:41:14.637757] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.058 [2024-07-26 16:41:14.637779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.058 [2024-07-26 16:41:14.641925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:55.058 [2024-07-26 16:41:14.650868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.058 [2024-07-26 16:41:14.651380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.058 [2024-07-26 16:41:14.651422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.058 [2024-07-26 16:41:14.651448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.058 [2024-07-26 16:41:14.651734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.058 [2024-07-26 16:41:14.652024] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.058 [2024-07-26 16:41:14.652071] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.058 [2024-07-26 16:41:14.652096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.058 [2024-07-26 16:41:14.656231] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:55.058 [2024-07-26 16:41:14.665425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.058 [2024-07-26 16:41:14.665901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.058 [2024-07-26 16:41:14.665943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.058 [2024-07-26 16:41:14.665969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.058 [2024-07-26 16:41:14.666266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.058 [2024-07-26 16:41:14.666557] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.058 [2024-07-26 16:41:14.666589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.058 [2024-07-26 16:41:14.666611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.058 [2024-07-26 16:41:14.670730] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:55.058 [2024-07-26 16:41:14.679929] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.058 [2024-07-26 16:41:14.680478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.058 [2024-07-26 16:41:14.680520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.058 [2024-07-26 16:41:14.680546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.058 [2024-07-26 16:41:14.680835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.058 [2024-07-26 16:41:14.681137] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.058 [2024-07-26 16:41:14.681170] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.058 [2024-07-26 16:41:14.681193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.058 [2024-07-26 16:41:14.685323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:55.058 [2024-07-26 16:41:14.694552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.058 [2024-07-26 16:41:14.695067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.058 [2024-07-26 16:41:14.695109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.058 [2024-07-26 16:41:14.695135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.058 [2024-07-26 16:41:14.695423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.058 [2024-07-26 16:41:14.695711] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.058 [2024-07-26 16:41:14.695743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.058 [2024-07-26 16:41:14.695765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.058 [2024-07-26 16:41:14.699901] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:55.058 [2024-07-26 16:41:14.709124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.059 [2024-07-26 16:41:14.709639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.059 [2024-07-26 16:41:14.709680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.059 [2024-07-26 16:41:14.709705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.059 [2024-07-26 16:41:14.709993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.059 [2024-07-26 16:41:14.710292] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.059 [2024-07-26 16:41:14.710324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.059 [2024-07-26 16:41:14.710347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.059 [2024-07-26 16:41:14.714507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:55.059 [2024-07-26 16:41:14.723733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.059 [2024-07-26 16:41:14.724212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.059 [2024-07-26 16:41:14.724253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.059 [2024-07-26 16:41:14.724280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.059 [2024-07-26 16:41:14.724569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.059 [2024-07-26 16:41:14.724861] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.059 [2024-07-26 16:41:14.724893] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.059 [2024-07-26 16:41:14.724915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.059 [2024-07-26 16:41:14.729080] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:55.059 [2024-07-26 16:41:14.738355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.059 [2024-07-26 16:41:14.739032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.059 [2024-07-26 16:41:14.739098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.059 [2024-07-26 16:41:14.739124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.059 [2024-07-26 16:41:14.739414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.059 [2024-07-26 16:41:14.739705] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.059 [2024-07-26 16:41:14.739737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.059 [2024-07-26 16:41:14.739759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.059 [2024-07-26 16:41:14.743913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:55.059 [2024-07-26 16:41:14.752937] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.059 [2024-07-26 16:41:14.753425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.059 [2024-07-26 16:41:14.753467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.059 [2024-07-26 16:41:14.753500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.059 [2024-07-26 16:41:14.753791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.059 [2024-07-26 16:41:14.754096] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.059 [2024-07-26 16:41:14.754129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.059 [2024-07-26 16:41:14.754152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.059 [2024-07-26 16:41:14.758315] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:55.059 [2024-07-26 16:41:14.767571] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.059 [2024-07-26 16:41:14.768083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.059 [2024-07-26 16:41:14.768125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.059 [2024-07-26 16:41:14.768152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.059 [2024-07-26 16:41:14.768440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.059 [2024-07-26 16:41:14.768730] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.059 [2024-07-26 16:41:14.768762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.059 [2024-07-26 16:41:14.768785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.059 [2024-07-26 16:41:14.772928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:55.059 [2024-07-26 16:41:14.782114] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.059 [2024-07-26 16:41:14.782693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.059 [2024-07-26 16:41:14.782751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.059 [2024-07-26 16:41:14.782778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.059 [2024-07-26 16:41:14.783075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.059 [2024-07-26 16:41:14.783367] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.059 [2024-07-26 16:41:14.783399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.059 [2024-07-26 16:41:14.783421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.059 [2024-07-26 16:41:14.787561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:55.059 [2024-07-26 16:41:14.796590] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.059 [2024-07-26 16:41:14.797100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.059 [2024-07-26 16:41:14.797142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.059 [2024-07-26 16:41:14.797168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.059 [2024-07-26 16:41:14.797456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.059 [2024-07-26 16:41:14.797747] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.059 [2024-07-26 16:41:14.797785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.059 [2024-07-26 16:41:14.797808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.059 [2024-07-26 16:41:14.801950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:55.059 [2024-07-26 16:41:14.811186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.059 [2024-07-26 16:41:14.811777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.059 [2024-07-26 16:41:14.811818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.059 [2024-07-26 16:41:14.811844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.059 [2024-07-26 16:41:14.812143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.059 [2024-07-26 16:41:14.812434] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.059 [2024-07-26 16:41:14.812467] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.059 [2024-07-26 16:41:14.812489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.059 [2024-07-26 16:41:14.816620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:55.318 [2024-07-26 16:41:14.825627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.318 [2024-07-26 16:41:14.826127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.318 [2024-07-26 16:41:14.826169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.318 [2024-07-26 16:41:14.826195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.318 [2024-07-26 16:41:14.826483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.318 [2024-07-26 16:41:14.826773] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.319 [2024-07-26 16:41:14.826804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.319 [2024-07-26 16:41:14.826827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.319 [2024-07-26 16:41:14.830983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:55.319 [2024-07-26 16:41:14.840233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.319 [2024-07-26 16:41:14.840750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.319 [2024-07-26 16:41:14.840791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.319 [2024-07-26 16:41:14.840834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.319 [2024-07-26 16:41:14.841136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.319 [2024-07-26 16:41:14.841427] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.319 [2024-07-26 16:41:14.841459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.319 [2024-07-26 16:41:14.841482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.319 [2024-07-26 16:41:14.845615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:55.319 [2024-07-26 16:41:14.854872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.319 [2024-07-26 16:41:14.855391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.319 [2024-07-26 16:41:14.855432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.319 [2024-07-26 16:41:14.855458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.319 [2024-07-26 16:41:14.855745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.319 [2024-07-26 16:41:14.856035] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.319 [2024-07-26 16:41:14.856078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.319 [2024-07-26 16:41:14.856102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.319 [2024-07-26 16:41:14.860253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:55.319 [2024-07-26 16:41:14.869505] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.319 [2024-07-26 16:41:14.870134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.319 [2024-07-26 16:41:14.870177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.319 [2024-07-26 16:41:14.870204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.319 [2024-07-26 16:41:14.870491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.319 [2024-07-26 16:41:14.870780] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.319 [2024-07-26 16:41:14.870812] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.319 [2024-07-26 16:41:14.870834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.319 [2024-07-26 16:41:14.874976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:55.319 [2024-07-26 16:41:14.883997] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.319 [2024-07-26 16:41:14.884498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.319 [2024-07-26 16:41:14.884540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.319 [2024-07-26 16:41:14.884567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.319 [2024-07-26 16:41:14.884857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.319 [2024-07-26 16:41:14.885191] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.319 [2024-07-26 16:41:14.885232] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.319 [2024-07-26 16:41:14.885266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.319 [2024-07-26 16:41:14.889448] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:55.319 [2024-07-26 16:41:14.898483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.319 [2024-07-26 16:41:14.898998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.319 [2024-07-26 16:41:14.899040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.319 [2024-07-26 16:41:14.899085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.319 [2024-07-26 16:41:14.899378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.319 [2024-07-26 16:41:14.899670] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.319 [2024-07-26 16:41:14.899701] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.319 [2024-07-26 16:41:14.899724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.319 [2024-07-26 16:41:14.903880] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:55.319 [2024-07-26 16:41:14.913143] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.319 [2024-07-26 16:41:14.913629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.319 [2024-07-26 16:41:14.913670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.319 [2024-07-26 16:41:14.913696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.319 [2024-07-26 16:41:14.913985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.319 [2024-07-26 16:41:14.914291] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.319 [2024-07-26 16:41:14.914323] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.319 [2024-07-26 16:41:14.914346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.319 [2024-07-26 16:41:14.918492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:55.319 [2024-07-26 16:41:14.927735] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.319 [2024-07-26 16:41:14.928275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.319 [2024-07-26 16:41:14.928317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.319 [2024-07-26 16:41:14.928344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.319 [2024-07-26 16:41:14.928633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.319 [2024-07-26 16:41:14.928924] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.319 [2024-07-26 16:41:14.928956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.319 [2024-07-26 16:41:14.928978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.319 [2024-07-26 16:41:14.933129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:55.319 [2024-07-26 16:41:14.942367] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.319 [2024-07-26 16:41:14.942845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.319 [2024-07-26 16:41:14.942885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.319 [2024-07-26 16:41:14.942912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.319 [2024-07-26 16:41:14.943213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.319 [2024-07-26 16:41:14.943509] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.319 [2024-07-26 16:41:14.943542] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.319 [2024-07-26 16:41:14.943564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.319 [2024-07-26 16:41:14.947718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:55.319 [2024-07-26 16:41:14.956954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.319 [2024-07-26 16:41:14.957476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.319 [2024-07-26 16:41:14.957517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.319 [2024-07-26 16:41:14.957542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.319 [2024-07-26 16:41:14.957830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.319 [2024-07-26 16:41:14.958134] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.319 [2024-07-26 16:41:14.958166] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.319 [2024-07-26 16:41:14.958189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.319 [2024-07-26 16:41:14.962327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:55.319 [2024-07-26 16:41:14.971541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.319 [2024-07-26 16:41:14.972155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.319 [2024-07-26 16:41:14.972197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.319 [2024-07-26 16:41:14.972223] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.319 [2024-07-26 16:41:14.972512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.319 [2024-07-26 16:41:14.972802] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.320 [2024-07-26 16:41:14.972833] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.320 [2024-07-26 16:41:14.972856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.320 [2024-07-26 16:41:14.976984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:55.320 [2024-07-26 16:41:14.986043] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.320 [2024-07-26 16:41:14.986555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.320 [2024-07-26 16:41:14.986597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.320 [2024-07-26 16:41:14.986623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.320 [2024-07-26 16:41:14.986910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.320 [2024-07-26 16:41:14.987215] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.320 [2024-07-26 16:41:14.987248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.320 [2024-07-26 16:41:14.987270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.320 [2024-07-26 16:41:14.991400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:55.320 [2024-07-26 16:41:15.000644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.320 [2024-07-26 16:41:15.001167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.320 [2024-07-26 16:41:15.001210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.320 [2024-07-26 16:41:15.001236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.320 [2024-07-26 16:41:15.001524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.320 [2024-07-26 16:41:15.001814] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.320 [2024-07-26 16:41:15.001846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.320 [2024-07-26 16:41:15.001869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.320 [2024-07-26 16:41:15.005995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:55.320 [2024-07-26 16:41:15.015209] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.320 [2024-07-26 16:41:15.015715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.320 [2024-07-26 16:41:15.015756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.320 [2024-07-26 16:41:15.015783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.320 [2024-07-26 16:41:15.016082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.320 [2024-07-26 16:41:15.016372] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.320 [2024-07-26 16:41:15.016404] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.320 [2024-07-26 16:41:15.016427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.320 [2024-07-26 16:41:15.020559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:55.320 [2024-07-26 16:41:15.029803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.320 [2024-07-26 16:41:15.030317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.320 [2024-07-26 16:41:15.030359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.320 [2024-07-26 16:41:15.030385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.320 [2024-07-26 16:41:15.030672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.320 [2024-07-26 16:41:15.030961] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.320 [2024-07-26 16:41:15.030993] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.320 [2024-07-26 16:41:15.031015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.320 [2024-07-26 16:41:15.035156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:55.320 [2024-07-26 16:41:15.044361] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.320 [2024-07-26 16:41:15.044873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.320 [2024-07-26 16:41:15.044914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.320 [2024-07-26 16:41:15.044946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.320 [2024-07-26 16:41:15.045246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.320 [2024-07-26 16:41:15.045547] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.320 [2024-07-26 16:41:15.045579] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.320 [2024-07-26 16:41:15.045602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.320 [2024-07-26 16:41:15.049728] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:55.320 [2024-07-26 16:41:15.058947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.320 [2024-07-26 16:41:15.059467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.320 [2024-07-26 16:41:15.059508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.320 [2024-07-26 16:41:15.059534] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.320 [2024-07-26 16:41:15.059822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.320 [2024-07-26 16:41:15.060123] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.320 [2024-07-26 16:41:15.060155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.320 [2024-07-26 16:41:15.060178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.320 [2024-07-26 16:41:15.064303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:55.320 [2024-07-26 16:41:15.073499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.320 [2024-07-26 16:41:15.073987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.320 [2024-07-26 16:41:15.074029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.320 [2024-07-26 16:41:15.074055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.320 [2024-07-26 16:41:15.074356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.320 [2024-07-26 16:41:15.074646] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.320 [2024-07-26 16:41:15.074677] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.320 [2024-07-26 16:41:15.074700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.320 [2024-07-26 16:41:15.078839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:55.579 [2024-07-26 16:41:15.088053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.579 [2024-07-26 16:41:15.088567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.579 [2024-07-26 16:41:15.088609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.579 [2024-07-26 16:41:15.088636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.579 [2024-07-26 16:41:15.088923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.579 [2024-07-26 16:41:15.089232] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.579 [2024-07-26 16:41:15.089265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.580 [2024-07-26 16:41:15.089287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.580 [2024-07-26 16:41:15.093438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:55.580 [2024-07-26 16:41:15.102645] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.580 [2024-07-26 16:41:15.103153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.580 [2024-07-26 16:41:15.103194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.580 [2024-07-26 16:41:15.103221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.580 [2024-07-26 16:41:15.103508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.580 [2024-07-26 16:41:15.103797] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.580 [2024-07-26 16:41:15.103829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.580 [2024-07-26 16:41:15.103851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.580 [2024-07-26 16:41:15.107980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:55.580 [2024-07-26 16:41:15.117180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.580 [2024-07-26 16:41:15.117702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.580 [2024-07-26 16:41:15.117743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.580 [2024-07-26 16:41:15.117769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.580 [2024-07-26 16:41:15.118056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.580 [2024-07-26 16:41:15.118355] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.580 [2024-07-26 16:41:15.118387] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.580 [2024-07-26 16:41:15.118410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.580 [2024-07-26 16:41:15.122534] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:55.580 [2024-07-26 16:41:15.131753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.580 [2024-07-26 16:41:15.132267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.580 [2024-07-26 16:41:15.132308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.580 [2024-07-26 16:41:15.132334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.580 [2024-07-26 16:41:15.132622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.580 [2024-07-26 16:41:15.132911] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.580 [2024-07-26 16:41:15.132943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.580 [2024-07-26 16:41:15.132966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.580 [2024-07-26 16:41:15.137110] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:55.580 [2024-07-26 16:41:15.146301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.580 [2024-07-26 16:41:15.146914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.580 [2024-07-26 16:41:15.146971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.580 [2024-07-26 16:41:15.146997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.580 [2024-07-26 16:41:15.147357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.580 [2024-07-26 16:41:15.147646] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.580 [2024-07-26 16:41:15.147679] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.580 [2024-07-26 16:41:15.147703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.580 [2024-07-26 16:41:15.151815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:55.580 [2024-07-26 16:41:15.160762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.580 [2024-07-26 16:41:15.161239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.580 [2024-07-26 16:41:15.161281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.580 [2024-07-26 16:41:15.161309] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.580 [2024-07-26 16:41:15.161597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.580 [2024-07-26 16:41:15.161887] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.580 [2024-07-26 16:41:15.161919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.580 [2024-07-26 16:41:15.161942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.580 [2024-07-26 16:41:15.166082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:55.580 [2024-07-26 16:41:15.175273] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.580 [2024-07-26 16:41:15.175867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.580 [2024-07-26 16:41:15.175935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.580 [2024-07-26 16:41:15.175963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.580 [2024-07-26 16:41:15.176261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.580 [2024-07-26 16:41:15.176551] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.580 [2024-07-26 16:41:15.176583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.580 [2024-07-26 16:41:15.176606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.580 [2024-07-26 16:41:15.180733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:55.580 [2024-07-26 16:41:15.189694] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.580 [2024-07-26 16:41:15.190223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.580 [2024-07-26 16:41:15.190274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.580 [2024-07-26 16:41:15.190306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.580 [2024-07-26 16:41:15.190596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.580 [2024-07-26 16:41:15.190885] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.580 [2024-07-26 16:41:15.190917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.580 [2024-07-26 16:41:15.190940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.580 [2024-07-26 16:41:15.195097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:55.580 [2024-07-26 16:41:15.204345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.580 [2024-07-26 16:41:15.204956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.580 [2024-07-26 16:41:15.205022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.580 [2024-07-26 16:41:15.205048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.580 [2024-07-26 16:41:15.205355] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.580 [2024-07-26 16:41:15.205655] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.580 [2024-07-26 16:41:15.205687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.580 [2024-07-26 16:41:15.205710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.580 [2024-07-26 16:41:15.209838] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:55.580 [2024-07-26 16:41:15.218816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.580 [2024-07-26 16:41:15.219316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.580 [2024-07-26 16:41:15.219361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.580 [2024-07-26 16:41:15.219388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.580 [2024-07-26 16:41:15.219693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.580 [2024-07-26 16:41:15.219983] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.580 [2024-07-26 16:41:15.220014] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.580 [2024-07-26 16:41:15.220037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.580 [2024-07-26 16:41:15.224189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:55.580 [2024-07-26 16:41:15.233393] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.580 [2024-07-26 16:41:15.233915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.580 [2024-07-26 16:41:15.233963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.580 [2024-07-26 16:41:15.233989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.580 [2024-07-26 16:41:15.234287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.581 [2024-07-26 16:41:15.234582] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.581 [2024-07-26 16:41:15.234614] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.581 [2024-07-26 16:41:15.234638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.581 [2024-07-26 16:41:15.238752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:55.581 [2024-07-26 16:41:15.247965] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.581 [2024-07-26 16:41:15.248468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.581 [2024-07-26 16:41:15.248520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.581 [2024-07-26 16:41:15.248547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.581 [2024-07-26 16:41:15.248836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.581 [2024-07-26 16:41:15.249137] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.581 [2024-07-26 16:41:15.249170] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.581 [2024-07-26 16:41:15.249210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.581 [2024-07-26 16:41:15.253355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:55.581 [2024-07-26 16:41:15.262566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.581 [2024-07-26 16:41:15.263080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.581 [2024-07-26 16:41:15.263129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.581 [2024-07-26 16:41:15.263155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.581 [2024-07-26 16:41:15.263442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.581 [2024-07-26 16:41:15.263731] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.581 [2024-07-26 16:41:15.263763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.581 [2024-07-26 16:41:15.263785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.581 [2024-07-26 16:41:15.267915] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:55.581 [2024-07-26 16:41:15.277332] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.581 [2024-07-26 16:41:15.277860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.581 [2024-07-26 16:41:15.277912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.581 [2024-07-26 16:41:15.277939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.581 [2024-07-26 16:41:15.278240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.581 [2024-07-26 16:41:15.278530] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.581 [2024-07-26 16:41:15.278562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.581 [2024-07-26 16:41:15.278584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.581 [2024-07-26 16:41:15.282711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:55.581 [2024-07-26 16:41:15.291923] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.581 [2024-07-26 16:41:15.292431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.581 [2024-07-26 16:41:15.292481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.581 [2024-07-26 16:41:15.292508] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.581 [2024-07-26 16:41:15.292795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.581 [2024-07-26 16:41:15.293095] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.581 [2024-07-26 16:41:15.293128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.581 [2024-07-26 16:41:15.293150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.581 [2024-07-26 16:41:15.297288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:55.581 [2024-07-26 16:41:15.306493] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.581 [2024-07-26 16:41:15.307017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.581 [2024-07-26 16:41:15.307072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.581 [2024-07-26 16:41:15.307101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.581 [2024-07-26 16:41:15.307390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.581 [2024-07-26 16:41:15.307681] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.581 [2024-07-26 16:41:15.307712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.581 [2024-07-26 16:41:15.307735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.581 [2024-07-26 16:41:15.311862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:55.581 [2024-07-26 16:41:15.321056] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.581 [2024-07-26 16:41:15.321546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.581 [2024-07-26 16:41:15.321597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.581 [2024-07-26 16:41:15.321623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.581 [2024-07-26 16:41:15.321912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.581 [2024-07-26 16:41:15.322214] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.581 [2024-07-26 16:41:15.322246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.581 [2024-07-26 16:41:15.322273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.581 [2024-07-26 16:41:15.326398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:55.581 [2024-07-26 16:41:15.335592] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.581 [2024-07-26 16:41:15.336094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.581 [2024-07-26 16:41:15.336149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.581 [2024-07-26 16:41:15.336176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.581 [2024-07-26 16:41:15.336462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.581 [2024-07-26 16:41:15.336751] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.581 [2024-07-26 16:41:15.336782] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.581 [2024-07-26 16:41:15.336805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.840 [2024-07-26 16:41:15.340932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:55.840 [2024-07-26 16:41:15.350145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.840 [2024-07-26 16:41:15.350665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.840 [2024-07-26 16:41:15.350716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.840 [2024-07-26 16:41:15.350742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.840 [2024-07-26 16:41:15.351029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.840 [2024-07-26 16:41:15.351331] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.840 [2024-07-26 16:41:15.351363] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.840 [2024-07-26 16:41:15.351389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.840 [2024-07-26 16:41:15.355516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:55.840 [2024-07-26 16:41:15.364711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.840 [2024-07-26 16:41:15.365220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.841 [2024-07-26 16:41:15.365268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.841 [2024-07-26 16:41:15.365294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.841 [2024-07-26 16:41:15.365583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.841 [2024-07-26 16:41:15.365871] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.841 [2024-07-26 16:41:15.365902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.841 [2024-07-26 16:41:15.365924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.841 [2024-07-26 16:41:15.370067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:55.841 [2024-07-26 16:41:15.379265] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.841 [2024-07-26 16:41:15.379774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.841 [2024-07-26 16:41:15.379823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.841 [2024-07-26 16:41:15.379849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.841 [2024-07-26 16:41:15.380148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.841 [2024-07-26 16:41:15.380447] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.841 [2024-07-26 16:41:15.380479] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.841 [2024-07-26 16:41:15.380501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.841 [2024-07-26 16:41:15.384610] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:55.841 [2024-07-26 16:41:15.393816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.841 [2024-07-26 16:41:15.394358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.841 [2024-07-26 16:41:15.394399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.841 [2024-07-26 16:41:15.394427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.841 [2024-07-26 16:41:15.394714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.841 [2024-07-26 16:41:15.395003] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.841 [2024-07-26 16:41:15.395035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.841 [2024-07-26 16:41:15.395072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.841 [2024-07-26 16:41:15.399212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:55.841 [2024-07-26 16:41:15.408401] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.841 [2024-07-26 16:41:15.408884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.841 [2024-07-26 16:41:15.408933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.841 [2024-07-26 16:41:15.408959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.841 [2024-07-26 16:41:15.409258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.841 [2024-07-26 16:41:15.409548] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.841 [2024-07-26 16:41:15.409580] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.841 [2024-07-26 16:41:15.409603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.841 [2024-07-26 16:41:15.413721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:55.841 [2024-07-26 16:41:15.422928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.841 [2024-07-26 16:41:15.423459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.841 [2024-07-26 16:41:15.423506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.841 [2024-07-26 16:41:15.423533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.841 [2024-07-26 16:41:15.423819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.841 [2024-07-26 16:41:15.424122] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.841 [2024-07-26 16:41:15.424163] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.841 [2024-07-26 16:41:15.424186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.841 [2024-07-26 16:41:15.428327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:55.841 [2024-07-26 16:41:15.437514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.841 [2024-07-26 16:41:15.438014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.841 [2024-07-26 16:41:15.438074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.841 [2024-07-26 16:41:15.438103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.841 [2024-07-26 16:41:15.438391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.841 [2024-07-26 16:41:15.438680] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.841 [2024-07-26 16:41:15.438711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.841 [2024-07-26 16:41:15.438734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.841 [2024-07-26 16:41:15.442851] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:55.841 [2024-07-26 16:41:15.452045] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.841 [2024-07-26 16:41:15.452564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.841 [2024-07-26 16:41:15.452611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.841 [2024-07-26 16:41:15.452637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.841 [2024-07-26 16:41:15.452923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.841 [2024-07-26 16:41:15.453225] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.841 [2024-07-26 16:41:15.453257] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.841 [2024-07-26 16:41:15.453283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.841 [2024-07-26 16:41:15.457404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:55.841 [2024-07-26 16:41:15.466628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.841 [2024-07-26 16:41:15.467148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.841 [2024-07-26 16:41:15.467198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.841 [2024-07-26 16:41:15.467223] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.841 [2024-07-26 16:41:15.467510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.841 [2024-07-26 16:41:15.467800] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.841 [2024-07-26 16:41:15.467831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.841 [2024-07-26 16:41:15.467854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.841 [2024-07-26 16:41:15.471987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:55.841 [2024-07-26 16:41:15.481204] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.841 [2024-07-26 16:41:15.481705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.841 [2024-07-26 16:41:15.481752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.841 [2024-07-26 16:41:15.481780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.841 [2024-07-26 16:41:15.482077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.841 [2024-07-26 16:41:15.482376] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.841 [2024-07-26 16:41:15.482409] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.841 [2024-07-26 16:41:15.482432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.841 [2024-07-26 16:41:15.486569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:55.841 [2024-07-26 16:41:15.495818] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.841 [2024-07-26 16:41:15.496329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.841 [2024-07-26 16:41:15.496371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.841 [2024-07-26 16:41:15.496397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.841 [2024-07-26 16:41:15.496684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.841 [2024-07-26 16:41:15.496972] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.841 [2024-07-26 16:41:15.497005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.841 [2024-07-26 16:41:15.497027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.841 [2024-07-26 16:41:15.501166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:55.841 [2024-07-26 16:41:15.510376] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.841 [2024-07-26 16:41:15.510892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.842 [2024-07-26 16:41:15.510933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.842 [2024-07-26 16:41:15.510959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.842 [2024-07-26 16:41:15.511259] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.842 [2024-07-26 16:41:15.511548] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.842 [2024-07-26 16:41:15.511581] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.842 [2024-07-26 16:41:15.511604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.842 [2024-07-26 16:41:15.515742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:55.842 [2024-07-26 16:41:15.524955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.842 [2024-07-26 16:41:15.525480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.842 [2024-07-26 16:41:15.525523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.842 [2024-07-26 16:41:15.525549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.842 [2024-07-26 16:41:15.525836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.842 [2024-07-26 16:41:15.526147] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.842 [2024-07-26 16:41:15.526180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.842 [2024-07-26 16:41:15.526204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.842 [2024-07-26 16:41:15.530336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:55.842 [2024-07-26 16:41:15.539530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.842 [2024-07-26 16:41:15.540073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.842 [2024-07-26 16:41:15.540115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.842 [2024-07-26 16:41:15.540141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.842 [2024-07-26 16:41:15.540429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.842 [2024-07-26 16:41:15.540718] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.842 [2024-07-26 16:41:15.540751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.842 [2024-07-26 16:41:15.540774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.842 [2024-07-26 16:41:15.544906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:55.842 [2024-07-26 16:41:15.554121] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.842 [2024-07-26 16:41:15.554624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.842 [2024-07-26 16:41:15.554665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.842 [2024-07-26 16:41:15.554692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.842 [2024-07-26 16:41:15.554979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.842 [2024-07-26 16:41:15.555280] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.842 [2024-07-26 16:41:15.555314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.842 [2024-07-26 16:41:15.555337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.842 [2024-07-26 16:41:15.559457] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:55.842 [2024-07-26 16:41:15.568660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.842 [2024-07-26 16:41:15.569159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.842 [2024-07-26 16:41:15.569201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.842 [2024-07-26 16:41:15.569228] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.842 [2024-07-26 16:41:15.569515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.842 [2024-07-26 16:41:15.569806] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.842 [2024-07-26 16:41:15.569838] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.842 [2024-07-26 16:41:15.569867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.842 [2024-07-26 16:41:15.574000] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:55.842 [2024-07-26 16:41:15.583214] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.842 [2024-07-26 16:41:15.583737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.842 [2024-07-26 16:41:15.583779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.842 [2024-07-26 16:41:15.583805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.842 [2024-07-26 16:41:15.584109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.842 [2024-07-26 16:41:15.584398] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.842 [2024-07-26 16:41:15.584431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.842 [2024-07-26 16:41:15.584454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:55.842 [2024-07-26 16:41:15.588590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:55.842 [2024-07-26 16:41:15.597824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:55.842 [2024-07-26 16:41:15.598337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.842 [2024-07-26 16:41:15.598379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:55.842 [2024-07-26 16:41:15.598405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:55.842 [2024-07-26 16:41:15.598691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:55.842 [2024-07-26 16:41:15.598982] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:55.842 [2024-07-26 16:41:15.599013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:55.842 [2024-07-26 16:41:15.599036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.102 [2024-07-26 16:41:15.603181] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:56.102 [2024-07-26 16:41:15.612424] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.102 [2024-07-26 16:41:15.612932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.102 [2024-07-26 16:41:15.612974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.102 [2024-07-26 16:41:15.613001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.102 [2024-07-26 16:41:15.613300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.102 [2024-07-26 16:41:15.613588] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.102 [2024-07-26 16:41:15.613621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.102 [2024-07-26 16:41:15.613644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.102 [2024-07-26 16:41:15.617776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:56.102 [2024-07-26 16:41:15.627006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.102 [2024-07-26 16:41:15.627529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.102 [2024-07-26 16:41:15.627576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.102 [2024-07-26 16:41:15.627603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.102 [2024-07-26 16:41:15.627890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.102 [2024-07-26 16:41:15.628201] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.102 [2024-07-26 16:41:15.628233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.102 [2024-07-26 16:41:15.628256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.102 [2024-07-26 16:41:15.632401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:56.102 [2024-07-26 16:41:15.641627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.102 [2024-07-26 16:41:15.642192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.102 [2024-07-26 16:41:15.642235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.102 [2024-07-26 16:41:15.642261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.102 [2024-07-26 16:41:15.642549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.102 [2024-07-26 16:41:15.642839] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.102 [2024-07-26 16:41:15.642872] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.102 [2024-07-26 16:41:15.642895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.102 [2024-07-26 16:41:15.647024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:56.102 [2024-07-26 16:41:15.656236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.102 [2024-07-26 16:41:15.656724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.102 [2024-07-26 16:41:15.656765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.102 [2024-07-26 16:41:15.656791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.102 [2024-07-26 16:41:15.657095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.102 [2024-07-26 16:41:15.657384] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.102 [2024-07-26 16:41:15.657417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.102 [2024-07-26 16:41:15.657440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.102 [2024-07-26 16:41:15.661583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:56.102 [2024-07-26 16:41:15.670793] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.102 [2024-07-26 16:41:15.671305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.102 [2024-07-26 16:41:15.671359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.102 [2024-07-26 16:41:15.671385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.102 [2024-07-26 16:41:15.671680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.102 [2024-07-26 16:41:15.671971] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.102 [2024-07-26 16:41:15.672003] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.102 [2024-07-26 16:41:15.672026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.102 [2024-07-26 16:41:15.676176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:56.102 [2024-07-26 16:41:15.685372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.102 [2024-07-26 16:41:15.685877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.102 [2024-07-26 16:41:15.685918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.102 [2024-07-26 16:41:15.685945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.102 [2024-07-26 16:41:15.686248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.102 [2024-07-26 16:41:15.686536] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.102 [2024-07-26 16:41:15.686567] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.102 [2024-07-26 16:41:15.686589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.102 [2024-07-26 16:41:15.690714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:56.102 [2024-07-26 16:41:15.699930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.102 [2024-07-26 16:41:15.700461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.102 [2024-07-26 16:41:15.700503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.102 [2024-07-26 16:41:15.700529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.102 [2024-07-26 16:41:15.700816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.102 [2024-07-26 16:41:15.701118] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.102 [2024-07-26 16:41:15.701152] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.102 [2024-07-26 16:41:15.701175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.102 [2024-07-26 16:41:15.705299] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:56.102 [2024-07-26 16:41:15.714503] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.102 [2024-07-26 16:41:15.715018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.102 [2024-07-26 16:41:15.715070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.103 [2024-07-26 16:41:15.715099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.103 [2024-07-26 16:41:15.715388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.103 [2024-07-26 16:41:15.715678] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.103 [2024-07-26 16:41:15.715711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.103 [2024-07-26 16:41:15.715741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.103 [2024-07-26 16:41:15.719883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:56.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 818196 Killed "${NVMF_APP[@]}" "$@" 00:35:56.103 16:41:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:35:56.103 16:41:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:56.103 16:41:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:56.103 16:41:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:56.103 16:41:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:56.103 16:41:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=819289 00:35:56.103 16:41:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:56.103 16:41:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 819289 00:35:56.103 16:41:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 819289 ']' 00:35:56.103 16:41:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:56.103 16:41:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:56.103 16:41:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:56.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:56.103 16:41:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:56.103 16:41:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:56.103 [2024-07-26 16:41:15.729146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.103 [2024-07-26 16:41:15.729659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.103 [2024-07-26 16:41:15.729700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.103 [2024-07-26 16:41:15.729726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.103 [2024-07-26 16:41:15.730013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.103 [2024-07-26 16:41:15.730315] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.103 [2024-07-26 16:41:15.730347] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.103 [2024-07-26 16:41:15.730371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.103 [2024-07-26 16:41:15.734519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:56.103 [2024-07-26 16:41:15.743759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.103 [2024-07-26 16:41:15.744245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.103 [2024-07-26 16:41:15.744286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.103 [2024-07-26 16:41:15.744313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.103 [2024-07-26 16:41:15.744601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.103 [2024-07-26 16:41:15.744891] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.103 [2024-07-26 16:41:15.744923] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.103 [2024-07-26 16:41:15.744951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.103 [2024-07-26 16:41:15.749123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:56.103 [2024-07-26 16:41:15.758457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.103 [2024-07-26 16:41:15.758959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.103 [2024-07-26 16:41:15.759002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.103 [2024-07-26 16:41:15.759029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.103 [2024-07-26 16:41:15.759332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.103 [2024-07-26 16:41:15.759627] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.103 [2024-07-26 16:41:15.759659] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.103 [2024-07-26 16:41:15.759684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.103 [2024-07-26 16:41:15.763940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:56.103 [2024-07-26 16:41:15.773200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.103 [2024-07-26 16:41:15.773835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.103 [2024-07-26 16:41:15.773884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.103 [2024-07-26 16:41:15.773913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.103 [2024-07-26 16:41:15.774225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.103 [2024-07-26 16:41:15.774526] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.103 [2024-07-26 16:41:15.774559] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.103 [2024-07-26 16:41:15.774585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.103 [2024-07-26 16:41:15.778865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:56.103 [2024-07-26 16:41:15.787890] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.103 [2024-07-26 16:41:15.788417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.103 [2024-07-26 16:41:15.788459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.103 [2024-07-26 16:41:15.788486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.103 [2024-07-26 16:41:15.788781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.103 [2024-07-26 16:41:15.789090] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.103 [2024-07-26 16:41:15.789123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.103 [2024-07-26 16:41:15.789145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.103 [2024-07-26 16:41:15.793403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:56.103 [2024-07-26 16:41:15.802635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.103 [2024-07-26 16:41:15.803151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.103 [2024-07-26 16:41:15.803193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.103 [2024-07-26 16:41:15.803220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.103 [2024-07-26 16:41:15.803514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.103 [2024-07-26 16:41:15.803808] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.103 [2024-07-26 16:41:15.803840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.103 [2024-07-26 16:41:15.803863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.103 [2024-07-26 16:41:15.808115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:56.103 [2024-07-26 16:41:15.817311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.103 [2024-07-26 16:41:15.817859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.103 [2024-07-26 16:41:15.817902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.103 [2024-07-26 16:41:15.817929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.103 [2024-07-26 16:41:15.818239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.103 [2024-07-26 16:41:15.818534] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.103 [2024-07-26 16:41:15.818566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.103 [2024-07-26 16:41:15.818588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.103 [2024-07-26 16:41:15.822821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:56.103 [2024-07-26 16:41:15.823200] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:35:56.103 [2024-07-26 16:41:15.823328] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:56.103 [2024-07-26 16:41:15.832045] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.103 [2024-07-26 16:41:15.832545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.103 [2024-07-26 16:41:15.832586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.103 [2024-07-26 16:41:15.832612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.104 [2024-07-26 16:41:15.832907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.104 [2024-07-26 16:41:15.833216] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.104 [2024-07-26 16:41:15.833249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.104 [2024-07-26 16:41:15.833272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.104 [2024-07-26 16:41:15.837511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:56.104 [2024-07-26 16:41:15.846755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.104 [2024-07-26 16:41:15.847297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.104 [2024-07-26 16:41:15.847339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.104 [2024-07-26 16:41:15.847365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.104 [2024-07-26 16:41:15.847659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.104 [2024-07-26 16:41:15.847955] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.104 [2024-07-26 16:41:15.847988] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.104 [2024-07-26 16:41:15.848011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.104 [2024-07-26 16:41:15.852316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:56.104 [2024-07-26 16:41:15.861336] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.104 [2024-07-26 16:41:15.861881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.104 [2024-07-26 16:41:15.861925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.104 [2024-07-26 16:41:15.861953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.104 [2024-07-26 16:41:15.862264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.104 [2024-07-26 16:41:15.862562] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.104 [2024-07-26 16:41:15.862595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.104 [2024-07-26 16:41:15.862618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.363 [2024-07-26 16:41:15.866875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:56.363 [2024-07-26 16:41:15.876104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.363 [2024-07-26 16:41:15.876602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-26 16:41:15.876643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.363 [2024-07-26 16:41:15.876669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.363 [2024-07-26 16:41:15.876979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.363 [2024-07-26 16:41:15.877284] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.363 [2024-07-26 16:41:15.877316] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.363 [2024-07-26 16:41:15.877338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.363 [2024-07-26 16:41:15.881570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:56.363 [2024-07-26 16:41:15.890753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.363 [2024-07-26 16:41:15.891285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-26 16:41:15.891325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.364 [2024-07-26 16:41:15.891352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.364 [2024-07-26 16:41:15.891654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.364 [2024-07-26 16:41:15.891949] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.364 [2024-07-26 16:41:15.891980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.364 [2024-07-26 16:41:15.892002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.364 [2024-07-26 16:41:15.896281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:56.364 [2024-07-26 16:41:15.905466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.364 [2024-07-26 16:41:15.905982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.364 [2024-07-26 16:41:15.906024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.364 [2024-07-26 16:41:15.906050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.364 [2024-07-26 16:41:15.906354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.364 [2024-07-26 16:41:15.906650] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.364 [2024-07-26 16:41:15.906682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.364 [2024-07-26 16:41:15.906703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.364 [2024-07-26 16:41:15.910943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:56.364 EAL: No free 2048 kB hugepages reported on node 1 00:35:56.364 [2024-07-26 16:41:15.920162] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.364 [2024-07-26 16:41:15.920678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.364 [2024-07-26 16:41:15.920717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.364 [2024-07-26 16:41:15.920744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.364 [2024-07-26 16:41:15.921038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.364 [2024-07-26 16:41:15.921345] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.364 [2024-07-26 16:41:15.921377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.364 [2024-07-26 16:41:15.921401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.364 [2024-07-26 16:41:15.925634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:56.364 [2024-07-26 16:41:15.934837] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.364 [2024-07-26 16:41:15.935356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.364 [2024-07-26 16:41:15.935397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.364 [2024-07-26 16:41:15.935423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.364 [2024-07-26 16:41:15.935715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.364 [2024-07-26 16:41:15.936008] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.364 [2024-07-26 16:41:15.936045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.364 [2024-07-26 16:41:15.936080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.364 [2024-07-26 16:41:15.940298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:56.364 [2024-07-26 16:41:15.949447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.364 [2024-07-26 16:41:15.949943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.364 [2024-07-26 16:41:15.949983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.364 [2024-07-26 16:41:15.950009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.364 [2024-07-26 16:41:15.950311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.364 [2024-07-26 16:41:15.950605] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.364 [2024-07-26 16:41:15.950636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.364 [2024-07-26 16:41:15.950658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.364 [2024-07-26 16:41:15.954858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:56.364 [2024-07-26 16:41:15.963984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.364 [2024-07-26 16:41:15.964511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.364 [2024-07-26 16:41:15.964552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.364 [2024-07-26 16:41:15.964578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.364 [2024-07-26 16:41:15.964870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.364 [2024-07-26 16:41:15.965177] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.364 [2024-07-26 16:41:15.965209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.364 [2024-07-26 16:41:15.965231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.364 [2024-07-26 16:41:15.969438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:56.364 [2024-07-26 16:41:15.978617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.364 [2024-07-26 16:41:15.979172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.364 [2024-07-26 16:41:15.979213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.364 [2024-07-26 16:41:15.979240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.364 [2024-07-26 16:41:15.979532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.364 [2024-07-26 16:41:15.979826] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.364 [2024-07-26 16:41:15.979858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.364 [2024-07-26 16:41:15.979881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.364 [2024-07-26 16:41:15.984109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:56.364 [2024-07-26 16:41:15.985903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:56.364 [2024-07-26 16:41:15.993282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.364 [2024-07-26 16:41:15.993811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.364 [2024-07-26 16:41:15.993853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.364 [2024-07-26 16:41:15.993880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.364 [2024-07-26 16:41:15.994189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.364 [2024-07-26 16:41:15.994491] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.364 [2024-07-26 16:41:15.994524] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.364 [2024-07-26 16:41:15.994547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.364 [2024-07-26 16:41:15.998845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:56.364 [2024-07-26 16:41:16.007826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.364 [2024-07-26 16:41:16.008503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.364 [2024-07-26 16:41:16.008553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.364 [2024-07-26 16:41:16.008583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.364 [2024-07-26 16:41:16.008882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.364 [2024-07-26 16:41:16.009196] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.364 [2024-07-26 16:41:16.009229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.364 [2024-07-26 16:41:16.009256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.364 [2024-07-26 16:41:16.013493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:56.364 [2024-07-26 16:41:16.022487] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.364 [2024-07-26 16:41:16.023015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.364 [2024-07-26 16:41:16.023056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.364 [2024-07-26 16:41:16.023093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.364 [2024-07-26 16:41:16.023397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.364 [2024-07-26 16:41:16.023692] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.364 [2024-07-26 16:41:16.023723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.364 [2024-07-26 16:41:16.023745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.364 [2024-07-26 16:41:16.028010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:56.365 [2024-07-26 16:41:16.037363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.365 [2024-07-26 16:41:16.037866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.365 [2024-07-26 16:41:16.037907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.365 [2024-07-26 16:41:16.037940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.365 [2024-07-26 16:41:16.038250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.365 [2024-07-26 16:41:16.038548] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.365 [2024-07-26 16:41:16.038580] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.365 [2024-07-26 16:41:16.038603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.365 [2024-07-26 16:41:16.042905] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:56.365 [2024-07-26 16:41:16.051943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.365 [2024-07-26 16:41:16.052488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.365 [2024-07-26 16:41:16.052528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.365 [2024-07-26 16:41:16.052555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.365 [2024-07-26 16:41:16.052846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.365 [2024-07-26 16:41:16.053153] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.365 [2024-07-26 16:41:16.053185] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.365 [2024-07-26 16:41:16.053208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.365 [2024-07-26 16:41:16.057440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:56.365 [2024-07-26 16:41:16.066644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.365 [2024-07-26 16:41:16.067164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.365 [2024-07-26 16:41:16.067204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.365 [2024-07-26 16:41:16.067231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.365 [2024-07-26 16:41:16.067523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.365 [2024-07-26 16:41:16.067816] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.365 [2024-07-26 16:41:16.067847] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.365 [2024-07-26 16:41:16.067869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.365 [2024-07-26 16:41:16.072079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:56.365 [2024-07-26 16:41:16.081194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.365 [2024-07-26 16:41:16.081715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.365 [2024-07-26 16:41:16.081755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.365 [2024-07-26 16:41:16.081781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.365 [2024-07-26 16:41:16.082083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.365 [2024-07-26 16:41:16.082387] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.365 [2024-07-26 16:41:16.082433] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.365 [2024-07-26 16:41:16.082456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.365 [2024-07-26 16:41:16.086683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:56.365 [2024-07-26 16:41:16.095858] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.365 [2024-07-26 16:41:16.096347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.365 [2024-07-26 16:41:16.096388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.365 [2024-07-26 16:41:16.096414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.365 [2024-07-26 16:41:16.096707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.365 [2024-07-26 16:41:16.097046] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.365 [2024-07-26 16:41:16.097088] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.365 [2024-07-26 16:41:16.097112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.365 [2024-07-26 16:41:16.101355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:56.365 [2024-07-26 16:41:16.110670] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.365 [2024-07-26 16:41:16.111196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.365 [2024-07-26 16:41:16.111236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.365 [2024-07-26 16:41:16.111262] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.365 [2024-07-26 16:41:16.111554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.365 [2024-07-26 16:41:16.111851] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.365 [2024-07-26 16:41:16.111884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.365 [2024-07-26 16:41:16.111906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.365 [2024-07-26 16:41:16.116202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:56.625 [2024-07-26 16:41:16.125289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.625 [2024-07-26 16:41:16.126000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.625 [2024-07-26 16:41:16.126049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.625 [2024-07-26 16:41:16.126089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.625 [2024-07-26 16:41:16.126396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.625 [2024-07-26 16:41:16.126701] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.625 [2024-07-26 16:41:16.126733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.625 [2024-07-26 16:41:16.126761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.625 [2024-07-26 16:41:16.131071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:56.625 [2024-07-26 16:41:16.140055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.625 [2024-07-26 16:41:16.140632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.625 [2024-07-26 16:41:16.140677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.625 [2024-07-26 16:41:16.140706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.625 [2024-07-26 16:41:16.141005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.625 [2024-07-26 16:41:16.141317] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.625 [2024-07-26 16:41:16.141350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.625 [2024-07-26 16:41:16.141374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.625 [2024-07-26 16:41:16.145631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:56.625 [2024-07-26 16:41:16.154628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.625 [2024-07-26 16:41:16.155130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.625 [2024-07-26 16:41:16.155171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.625 [2024-07-26 16:41:16.155197] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.625 [2024-07-26 16:41:16.155491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.625 [2024-07-26 16:41:16.155786] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.625 [2024-07-26 16:41:16.155817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.625 [2024-07-26 16:41:16.155839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.625 [2024-07-26 16:41:16.160138] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:56.625 [2024-07-26 16:41:16.169245] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.625 [2024-07-26 16:41:16.169819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.625 [2024-07-26 16:41:16.169860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.625 [2024-07-26 16:41:16.169887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.625 [2024-07-26 16:41:16.170199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.625 [2024-07-26 16:41:16.170496] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.625 [2024-07-26 16:41:16.170528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.625 [2024-07-26 16:41:16.170551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.625 [2024-07-26 16:41:16.174860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:56.625 [2024-07-26 16:41:16.183889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.625 [2024-07-26 16:41:16.184395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.625 [2024-07-26 16:41:16.184436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.625 [2024-07-26 16:41:16.184468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.625 [2024-07-26 16:41:16.184763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.625 [2024-07-26 16:41:16.185072] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.625 [2024-07-26 16:41:16.185103] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.625 [2024-07-26 16:41:16.185126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.625 [2024-07-26 16:41:16.189413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:56.625 [2024-07-26 16:41:16.198458] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.625 [2024-07-26 16:41:16.198949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.625 [2024-07-26 16:41:16.198990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.625 [2024-07-26 16:41:16.199017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.625 [2024-07-26 16:41:16.199322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.625 [2024-07-26 16:41:16.199617] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.625 [2024-07-26 16:41:16.199648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.625 [2024-07-26 16:41:16.199670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.625 [2024-07-26 16:41:16.203866] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:56.625 [2024-07-26 16:41:16.212999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.625 [2024-07-26 16:41:16.213510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.625 [2024-07-26 16:41:16.213551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.625 [2024-07-26 16:41:16.213577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.625 [2024-07-26 16:41:16.213868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.625 [2024-07-26 16:41:16.214173] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.625 [2024-07-26 16:41:16.214205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.625 [2024-07-26 16:41:16.214227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.625 [2024-07-26 16:41:16.218427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:56.625 [2024-07-26 16:41:16.227561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.625 [2024-07-26 16:41:16.228114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.625 [2024-07-26 16:41:16.228156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.625 [2024-07-26 16:41:16.228182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.625 [2024-07-26 16:41:16.228475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.625 [2024-07-26 16:41:16.228779] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.625 [2024-07-26 16:41:16.228811] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.625 [2024-07-26 16:41:16.228833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.625 [2024-07-26 16:41:16.233080] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:56.625 [2024-07-26 16:41:16.242205] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.625 [2024-07-26 16:41:16.242735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.626 [2024-07-26 16:41:16.242777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.626 [2024-07-26 16:41:16.242803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.626 [2024-07-26 16:41:16.243113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.626 [2024-07-26 16:41:16.243409] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.626 [2024-07-26 16:41:16.243441] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.626 [2024-07-26 16:41:16.243464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.626 [2024-07-26 16:41:16.247749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:56.626 [2024-07-26 16:41:16.251949] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:56.626 [2024-07-26 16:41:16.251995] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:56.626 [2024-07-26 16:41:16.252028] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:56.626 [2024-07-26 16:41:16.252048] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:56.626 [2024-07-26 16:41:16.252081] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
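The app_setup_trace notices above describe how to pull trace data out of this run. A minimal sketch of acting on them, assuming the SPDK build's spdk_trace tool is on PATH and shared-memory instance id 0 as the notice states (these commands are not part of the captured log):

  # Snapshot the nvmf tracepoints of the running app, per the notice above:
  spdk_trace -s nvmf -i 0 > nvmf_trace_snapshot.txt
  # Or keep the raw shared-memory trace buffer for offline analysis:
  cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0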
00:35:56.626 [2024-07-26 16:41:16.252282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:56.626 [2024-07-26 16:41:16.252307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:56.626 [2024-07-26 16:41:16.252319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:35:56.626 [2024-07-26 16:41:16.256821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.626 [2024-07-26 16:41:16.257437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.626 [2024-07-26 16:41:16.257481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.626 [2024-07-26 16:41:16.257509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.626 [2024-07-26 16:41:16.257813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.626 [2024-07-26 16:41:16.258128] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.626 [2024-07-26 16:41:16.258161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.626 [2024-07-26 16:41:16.258186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.626 [2024-07-26 16:41:16.262458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:56.626 [2024-07-26 16:41:16.271477] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.626 [2024-07-26 16:41:16.272252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.626 [2024-07-26 16:41:16.272318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.626 [2024-07-26 16:41:16.272352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.626 [2024-07-26 16:41:16.272665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.626 [2024-07-26 16:41:16.272972] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.626 [2024-07-26 16:41:16.273006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.626 [2024-07-26 16:41:16.273035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.626 [2024-07-26 16:41:16.277283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:56.626 [2024-07-26 16:41:16.286288] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.626 [2024-07-26 16:41:16.286823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.626 [2024-07-26 16:41:16.286865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.626 [2024-07-26 16:41:16.286892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.626 [2024-07-26 16:41:16.287199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.626 [2024-07-26 16:41:16.287496] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.626 [2024-07-26 16:41:16.287527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.626 [2024-07-26 16:41:16.287550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.626 [2024-07-26 16:41:16.291809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:56.626 [2024-07-26 16:41:16.301139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.626 [2024-07-26 16:41:16.301673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.626 [2024-07-26 16:41:16.301715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.626 [2024-07-26 16:41:16.301741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.626 [2024-07-26 16:41:16.302034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.626 [2024-07-26 16:41:16.302339] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.626 [2024-07-26 16:41:16.302372] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.626 [2024-07-26 16:41:16.302395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.626 [2024-07-26 16:41:16.306647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:56.626 [2024-07-26 16:41:16.315854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.626 [2024-07-26 16:41:16.316340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.626 [2024-07-26 16:41:16.316383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.626 [2024-07-26 16:41:16.316409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.626 [2024-07-26 16:41:16.316703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.626 [2024-07-26 16:41:16.317004] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.626 [2024-07-26 16:41:16.317035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.626 [2024-07-26 16:41:16.317072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.626 [2024-07-26 16:41:16.321317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:56.626 [2024-07-26 16:41:16.330638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.626 [2024-07-26 16:41:16.331142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.626 [2024-07-26 16:41:16.331209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.626 [2024-07-26 16:41:16.331236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.626 [2024-07-26 16:41:16.331538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.626 [2024-07-26 16:41:16.331832] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.626 [2024-07-26 16:41:16.331863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.626 [2024-07-26 16:41:16.331886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.626 [2024-07-26 16:41:16.336168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:56.626 [2024-07-26 16:41:16.345449] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.626 [2024-07-26 16:41:16.346263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.626 [2024-07-26 16:41:16.346318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.626 [2024-07-26 16:41:16.346359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.626 [2024-07-26 16:41:16.346672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.626 [2024-07-26 16:41:16.346979] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.626 [2024-07-26 16:41:16.347012] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.626 [2024-07-26 16:41:16.347041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.626 [2024-07-26 16:41:16.351370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:56.626 [2024-07-26 16:41:16.360164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.626 [2024-07-26 16:41:16.360931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.626 [2024-07-26 16:41:16.360990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.626 [2024-07-26 16:41:16.361023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.626 [2024-07-26 16:41:16.361354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.626 [2024-07-26 16:41:16.361670] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.626 [2024-07-26 16:41:16.361704] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.626 [2024-07-26 16:41:16.361741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.626 [2024-07-26 16:41:16.366105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:56.626 [2024-07-26 16:41:16.374927] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.626 [2024-07-26 16:41:16.375567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.626 [2024-07-26 16:41:16.375613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.626 [2024-07-26 16:41:16.375644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.627 [2024-07-26 16:41:16.375944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.627 [2024-07-26 16:41:16.376257] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.627 [2024-07-26 16:41:16.376291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.627 [2024-07-26 16:41:16.376316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.627 [2024-07-26 16:41:16.380579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:56.886 [2024-07-26 16:41:16.389613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.886 [2024-07-26 16:41:16.390101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.886 [2024-07-26 16:41:16.390143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.886 [2024-07-26 16:41:16.390170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.886 [2024-07-26 16:41:16.390465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.886 [2024-07-26 16:41:16.390759] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.886 [2024-07-26 16:41:16.390791] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.886 [2024-07-26 16:41:16.390813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.886 [2024-07-26 16:41:16.395069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:56.886 [2024-07-26 16:41:16.404337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.886 [2024-07-26 16:41:16.404845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.886 [2024-07-26 16:41:16.404886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.886 [2024-07-26 16:41:16.404912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.886 [2024-07-26 16:41:16.405217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.886 [2024-07-26 16:41:16.405513] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.886 [2024-07-26 16:41:16.405545] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.886 [2024-07-26 16:41:16.405567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.887 [2024-07-26 16:41:16.409818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:56.887 [2024-07-26 16:41:16.419056] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.887 [2024-07-26 16:41:16.419561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.887 [2024-07-26 16:41:16.419602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.887 [2024-07-26 16:41:16.419628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.887 [2024-07-26 16:41:16.419921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.887 [2024-07-26 16:41:16.420234] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.887 [2024-07-26 16:41:16.420266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.887 [2024-07-26 16:41:16.420288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.887 [2024-07-26 16:41:16.424579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:56.887 [2024-07-26 16:41:16.433838] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.887 [2024-07-26 16:41:16.434386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.887 [2024-07-26 16:41:16.434428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.887 [2024-07-26 16:41:16.434454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.887 [2024-07-26 16:41:16.434744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.887 [2024-07-26 16:41:16.435040] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.887 [2024-07-26 16:41:16.435082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.887 [2024-07-26 16:41:16.435106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.887 [2024-07-26 16:41:16.439368] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:56.887 [2024-07-26 16:41:16.448580] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.887 [2024-07-26 16:41:16.449111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.887 [2024-07-26 16:41:16.449152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.887 [2024-07-26 16:41:16.449180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.887 [2024-07-26 16:41:16.449474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.887 [2024-07-26 16:41:16.449767] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.887 [2024-07-26 16:41:16.449799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.887 [2024-07-26 16:41:16.449821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.887 [2024-07-26 16:41:16.454084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:56.887 [2024-07-26 16:41:16.463215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.887 [2024-07-26 16:41:16.463718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.887 [2024-07-26 16:41:16.463758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.887 [2024-07-26 16:41:16.463784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.887 [2024-07-26 16:41:16.464090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.887 [2024-07-26 16:41:16.464384] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.887 [2024-07-26 16:41:16.464416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.887 [2024-07-26 16:41:16.464438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.887 [2024-07-26 16:41:16.468605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:56.887 [2024-07-26 16:41:16.477925] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.887 [2024-07-26 16:41:16.478433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.887 [2024-07-26 16:41:16.478474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.887 [2024-07-26 16:41:16.478500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.887 [2024-07-26 16:41:16.478789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.887 [2024-07-26 16:41:16.479101] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.887 [2024-07-26 16:41:16.479133] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.887 [2024-07-26 16:41:16.479155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.887 [2024-07-26 16:41:16.483337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:56.887 [2024-07-26 16:41:16.492405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.887 [2024-07-26 16:41:16.492902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.887 [2024-07-26 16:41:16.492944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.887 [2024-07-26 16:41:16.492969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.887 [2024-07-26 16:41:16.493280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.887 [2024-07-26 16:41:16.493584] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.887 [2024-07-26 16:41:16.493616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.887 [2024-07-26 16:41:16.493638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.887 [2024-07-26 16:41:16.497893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:56.887 [2024-07-26 16:41:16.506990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.887 [2024-07-26 16:41:16.507775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.887 [2024-07-26 16:41:16.507830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.887 [2024-07-26 16:41:16.507861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.887 [2024-07-26 16:41:16.508180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.887 [2024-07-26 16:41:16.508487] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.887 [2024-07-26 16:41:16.508520] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.887 [2024-07-26 16:41:16.508557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.887 [2024-07-26 16:41:16.512859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:56.887 [2024-07-26 16:41:16.521655] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.887 [2024-07-26 16:41:16.522299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.887 [2024-07-26 16:41:16.522347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.887 [2024-07-26 16:41:16.522377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.887 [2024-07-26 16:41:16.522676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.887 [2024-07-26 16:41:16.522974] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.887 [2024-07-26 16:41:16.523007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.887 [2024-07-26 16:41:16.523031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.887 [2024-07-26 16:41:16.527305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:56.887 [2024-07-26 16:41:16.536286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.887 [2024-07-26 16:41:16.536795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.887 [2024-07-26 16:41:16.536836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.887 [2024-07-26 16:41:16.536863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.887 [2024-07-26 16:41:16.537168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.887 [2024-07-26 16:41:16.537463] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.887 [2024-07-26 16:41:16.537494] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.887 [2024-07-26 16:41:16.537517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.887 [2024-07-26 16:41:16.541833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:56.887 [2024-07-26 16:41:16.550800] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.887 [2024-07-26 16:41:16.551311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.887 [2024-07-26 16:41:16.551352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.887 [2024-07-26 16:41:16.551378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.887 [2024-07-26 16:41:16.551668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.887 [2024-07-26 16:41:16.551962] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.888 [2024-07-26 16:41:16.551994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.888 [2024-07-26 16:41:16.552016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.888 [2024-07-26 16:41:16.556239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:56.888 [2024-07-26 16:41:16.565380] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.888 [2024-07-26 16:41:16.565930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.888 [2024-07-26 16:41:16.565971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.888 [2024-07-26 16:41:16.565996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.888 [2024-07-26 16:41:16.566300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.888 [2024-07-26 16:41:16.566595] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.888 [2024-07-26 16:41:16.566626] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.888 [2024-07-26 16:41:16.566649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.888 [2024-07-26 16:41:16.570869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:56.888 [2024-07-26 16:41:16.580053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.888 [2024-07-26 16:41:16.580586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.888 [2024-07-26 16:41:16.580626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.888 [2024-07-26 16:41:16.580651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.888 [2024-07-26 16:41:16.580941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.888 [2024-07-26 16:41:16.581244] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.888 [2024-07-26 16:41:16.581275] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.888 [2024-07-26 16:41:16.581297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.888 [2024-07-26 16:41:16.585516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:56.888 [2024-07-26 16:41:16.594607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.888 [2024-07-26 16:41:16.595155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.888 [2024-07-26 16:41:16.595196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.888 [2024-07-26 16:41:16.595222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.888 [2024-07-26 16:41:16.595513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.888 [2024-07-26 16:41:16.595806] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.888 [2024-07-26 16:41:16.595837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.888 [2024-07-26 16:41:16.595860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.888 [2024-07-26 16:41:16.600098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:56.888 [2024-07-26 16:41:16.609233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.888 [2024-07-26 16:41:16.609790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.888 [2024-07-26 16:41:16.609834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.888 [2024-07-26 16:41:16.609861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.888 [2024-07-26 16:41:16.610173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.888 [2024-07-26 16:41:16.610470] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.888 [2024-07-26 16:41:16.610502] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.888 [2024-07-26 16:41:16.610525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.888 [2024-07-26 16:41:16.614767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:56.888 [2024-07-26 16:41:16.623964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.888 [2024-07-26 16:41:16.624490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.888 [2024-07-26 16:41:16.624533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.888 [2024-07-26 16:41:16.624559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.888 [2024-07-26 16:41:16.624851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.888 [2024-07-26 16:41:16.625159] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.888 [2024-07-26 16:41:16.625191] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.888 [2024-07-26 16:41:16.625214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.888 [2024-07-26 16:41:16.629471] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:56.888 [2024-07-26 16:41:16.638742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:56.888 [2024-07-26 16:41:16.639253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.888 [2024-07-26 16:41:16.639294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:56.888 [2024-07-26 16:41:16.639320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:56.888 [2024-07-26 16:41:16.639615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:56.888 [2024-07-26 16:41:16.639909] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.888 [2024-07-26 16:41:16.639940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.888 [2024-07-26 16:41:16.639964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.888 [2024-07-26 16:41:16.644235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:57.148 [2024-07-26 16:41:16.653444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:57.148 [2024-07-26 16:41:16.653962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.148 [2024-07-26 16:41:16.654003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:57.148 [2024-07-26 16:41:16.654030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:57.148 [2024-07-26 16:41:16.654330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:57.148 [2024-07-26 16:41:16.654625] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:57.148 [2024-07-26 16:41:16.654657] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:57.148 [2024-07-26 16:41:16.654686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:57.148 [2024-07-26 16:41:16.658907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:57.148 [2024-07-26 16:41:16.668053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:57.148 [2024-07-26 16:41:16.668544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.148 [2024-07-26 16:41:16.668584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:57.148 [2024-07-26 16:41:16.668610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:57.148 [2024-07-26 16:41:16.668900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:57.148 [2024-07-26 16:41:16.669207] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:57.148 [2024-07-26 16:41:16.669239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:57.148 [2024-07-26 16:41:16.669262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:57.148 [2024-07-26 16:41:16.673469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:57.148 [2024-07-26 16:41:16.682544] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:57.148 [2024-07-26 16:41:16.683076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.148 [2024-07-26 16:41:16.683121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:57.148 [2024-07-26 16:41:16.683147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:57.148 [2024-07-26 16:41:16.683437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:57.148 [2024-07-26 16:41:16.683730] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:57.148 [2024-07-26 16:41:16.683761] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:57.148 [2024-07-26 16:41:16.683783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:57.148 [2024-07-26 16:41:16.687954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:57.148 [2024-07-26 16:41:16.697040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:57.148 [2024-07-26 16:41:16.697531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.148 [2024-07-26 16:41:16.697571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:57.148 [2024-07-26 16:41:16.697597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:57.148 [2024-07-26 16:41:16.697886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:57.148 [2024-07-26 16:41:16.698189] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:57.148 [2024-07-26 16:41:16.698221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:57.148 [2024-07-26 16:41:16.698244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:57.148 [2024-07-26 16:41:16.702444] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:57.148 [2024-07-26 16:41:16.711531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:57.148 [2024-07-26 16:41:16.712021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.148 [2024-07-26 16:41:16.712072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:57.148 [2024-07-26 16:41:16.712115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:57.148 [2024-07-26 16:41:16.712405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:57.148 [2024-07-26 16:41:16.712697] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:57.148 [2024-07-26 16:41:16.712728] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:57.148 [2024-07-26 16:41:16.712750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:57.148 [2024-07-26 16:41:16.716925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:57.148 [2024-07-26 16:41:16.725999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:57.148 [2024-07-26 16:41:16.726486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.148 [2024-07-26 16:41:16.726526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:57.148 [2024-07-26 16:41:16.726552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:57.148 [2024-07-26 16:41:16.726841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:57.148 [2024-07-26 16:41:16.727143] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:57.148 [2024-07-26 16:41:16.727175] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:57.148 [2024-07-26 16:41:16.727197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:57.148 [2024-07-26 16:41:16.731384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:57.148 [2024-07-26 16:41:16.740673] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:57.148 [2024-07-26 16:41:16.741164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.148 [2024-07-26 16:41:16.741206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:57.148 [2024-07-26 16:41:16.741232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:57.148 [2024-07-26 16:41:16.741520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:57.148 [2024-07-26 16:41:16.741811] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:57.148 [2024-07-26 16:41:16.741842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:57.148 [2024-07-26 16:41:16.741863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:57.148 [2024-07-26 16:41:16.745873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:57.148 [2024-07-26 16:41:16.754810] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:57.148 [2024-07-26 16:41:16.755279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.148 [2024-07-26 16:41:16.755317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:57.148 [2024-07-26 16:41:16.755340] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:57.148 [2024-07-26 16:41:16.755610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:57.149 [2024-07-26 16:41:16.755875] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:57.149 [2024-07-26 16:41:16.755903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:57.149 [2024-07-26 16:41:16.755924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:57.149 16:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:57.149 16:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:35:57.149 16:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:57.149 16:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:57.149 16:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:57.149 [2024-07-26 16:41:16.759788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:57.149 [2024-07-26 16:41:16.769093] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:57.149 [2024-07-26 16:41:16.769579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.149 [2024-07-26 16:41:16.769616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:57.149 [2024-07-26 16:41:16.769640] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:57.149 [2024-07-26 16:41:16.769917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:57.149 [2024-07-26 16:41:16.770207] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:57.149 [2024-07-26 16:41:16.770237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:57.149 [2024-07-26 16:41:16.770258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:57.149 [2024-07-26 16:41:16.774140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:57.149 16:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:57.149 16:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:57.149 16:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.149 16:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:57.149 [2024-07-26 16:41:16.776447] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:57.149 [2024-07-26 16:41:16.783169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:57.149 [2024-07-26 16:41:16.783679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.149 [2024-07-26 16:41:16.783730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:57.149 [2024-07-26 16:41:16.783753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:57.149 [2024-07-26 16:41:16.784053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:57.149 [2024-07-26 16:41:16.784316] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:57.149 [2024-07-26 16:41:16.784368] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:57.149 [2024-07-26 16:41:16.784386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:57.149 [2024-07-26 16:41:16.788128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:57.149 [2024-07-26 16:41:16.797469] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:57.149 [2024-07-26 16:41:16.797979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.149 [2024-07-26 16:41:16.798016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:57.149 [2024-07-26 16:41:16.798039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:57.149 [2024-07-26 16:41:16.798316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:57.149 [2024-07-26 16:41:16.798599] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:57.149 [2024-07-26 16:41:16.798627] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:57.149 [2024-07-26 16:41:16.798646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:35:57.149 16:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.149 16:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:57.149 16:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.149 16:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:57.149 [2024-07-26 16:41:16.802630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:57.149 [2024-07-26 16:41:16.811632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:57.149 [2024-07-26 16:41:16.812317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.149 [2024-07-26 16:41:16.812375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:57.149 [2024-07-26 16:41:16.812414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:57.149 [2024-07-26 16:41:16.812704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:57.149 [2024-07-26 16:41:16.812967] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:57.149 [2024-07-26 16:41:16.812996] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:57.149 [2024-07-26 16:41:16.813019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:57.149 [2024-07-26 16:41:16.817195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:57.149 [2024-07-26 16:41:16.825879] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:57.149 [2024-07-26 16:41:16.826499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.149 [2024-07-26 16:41:16.826545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:57.149 [2024-07-26 16:41:16.826573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:57.149 [2024-07-26 16:41:16.826860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:57.149 [2024-07-26 16:41:16.827154] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:57.149 [2024-07-26 16:41:16.827184] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:57.149 [2024-07-26 16:41:16.827209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:57.149 [2024-07-26 16:41:16.831072] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:57.149 [2024-07-26 16:41:16.840013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:57.149 [2024-07-26 16:41:16.840520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.149 [2024-07-26 16:41:16.840557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:57.149 [2024-07-26 16:41:16.840580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:57.149 [2024-07-26 16:41:16.840859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:57.149 [2024-07-26 16:41:16.841146] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:57.149 [2024-07-26 16:41:16.841176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:57.149 [2024-07-26 16:41:16.841196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:57.149 [2024-07-26 16:41:16.844934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:57.149 [2024-07-26 16:41:16.854196] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:57.149 [2024-07-26 16:41:16.854647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.149 [2024-07-26 16:41:16.854683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:57.149 [2024-07-26 16:41:16.854707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:57.149 [2024-07-26 16:41:16.854984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:57.149 [2024-07-26 16:41:16.855274] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:57.149 [2024-07-26 16:41:16.855303] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:57.149 [2024-07-26 16:41:16.855323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:57.149 [2024-07-26 16:41:16.859034] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:57.149 [2024-07-26 16:41:16.868358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:57.149 [2024-07-26 16:41:16.868886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.149 [2024-07-26 16:41:16.868923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:57.149 [2024-07-26 16:41:16.868946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:57.149 [2024-07-26 16:41:16.869216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:57.149 [2024-07-26 16:41:16.869492] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:57.149 [2024-07-26 16:41:16.869519] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:57.149 [2024-07-26 16:41:16.869539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:57.149 [2024-07-26 16:41:16.873380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:57.149 Malloc0 00:35:57.149 16:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.150 16:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:57.150 16:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.150 16:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:57.150 [2024-07-26 16:41:16.882635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:57.150 [2024-07-26 16:41:16.883119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.150 [2024-07-26 16:41:16.883155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:57.150 [2024-07-26 16:41:16.883179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:57.150 16:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.150 16:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:57.150 [2024-07-26 16:41:16.883442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:57.150 16:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.150 16:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:57.150 [2024-07-26 16:41:16.883704] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:57.150 [2024-07-26 16:41:16.883733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:57.150 [2024-07-26 16:41:16.883753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:35:57.150 [2024-07-26 16:41:16.887646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:57.150 16:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.150 16:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:57.150 16:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.150 16:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:57.150 [2024-07-26 16:41:16.895298] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:57.150 [2024-07-26 16:41:16.896885] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:57.150 16:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.150 16:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 818627 00:35:57.408 [2024-07-26 16:41:16.946757] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:36:07.376 00:36:07.376 Latency(us) 00:36:07.376 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:07.376 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:07.376 Verification LBA range: start 0x0 length 0x4000 00:36:07.376 Nvme1n1 : 15.02 4531.38 17.70 8783.20 0.00 9584.42 1183.29 42913.94 00:36:07.376 =================================================================================================================== 00:36:07.376 Total : 4531.38 17.70 8783.20 0.00 9584.42 1183.29 42913.94 00:36:07.376 16:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:36:07.376 16:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:07.376 16:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.376 16:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:07.376 16:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.376 16:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:36:07.376 16:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:36:07.376 16:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:07.376 16:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:36:07.376 16:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:07.376 16:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:36:07.376 16:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:07.376 16:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:07.376 rmmod nvme_tcp 00:36:07.376 rmmod nvme_fabrics 00:36:07.376 rmmod nvme_keyring 00:36:07.376 16:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:07.376 16:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:36:07.376 16:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:36:07.376 16:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@489 -- # '[' -n 819289 ']' 00:36:07.376 16:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 819289 00:36:07.376 16:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 819289 ']' 00:36:07.376 16:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 819289 00:36:07.376 16:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:36:07.376 16:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:07.376 16:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 819289 00:36:07.376 16:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:07.376 16:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:07.376 16:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 819289' 00:36:07.376 killing process with pid 819289 00:36:07.376 16:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 819289 00:36:07.376 16:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 819289 00:36:08.753 16:41:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:08.753 16:41:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:08.753 16:41:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:08.753 16:41:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:08.753 16:41:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:08.753 16:41:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:08.753 16:41:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:08.753 16:41:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:10.695 16:41:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:10.695 00:36:10.695 real 0m26.555s 00:36:10.695 user 1m12.934s 00:36:10.695 sys 0m4.636s 00:36:10.695 16:41:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:10.695 16:41:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:10.695 ************************************ 00:36:10.695 END TEST nvmf_bdevperf 00:36:10.695 ************************************ 00:36:10.695 16:41:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:36:10.695 16:41:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:10.695 16:41:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:10.695 16:41:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.695 ************************************ 00:36:10.695 START TEST nvmf_target_disconnect 00:36:10.695 ************************************ 00:36:10.695 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:36:10.695 * Looking for test storage... 00:36:10.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:10.695 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:10.695 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:36:10.695 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:10.695 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:10.695 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:10.695 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:10.695 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:10.695 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:10.695 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:10.695 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:10.695 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:10.695 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:10.695 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:10.695 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:10.695 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:10.695 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:10.696 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:10.696 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:10.696 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:10.696 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:10.696 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:10.696 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:10.696 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.696 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.696 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.696 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:36:10.696 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.696 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:36:10.696 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:10.696 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:10.696 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:10.696 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:10.696 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:10.696 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:10.696 16:41:30 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:10.696 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:10.696 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:36:10.696 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:36:10.696 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:36:10.696 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:36:10.696 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:10.696 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:10.696 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:10.696 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:10.696 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:10.696 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:10.696 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:10.696 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:10.696 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:10.696 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:10.696 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:36:10.696 16:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:36:12.601 16:41:32 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:12.601 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:12.601 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:12.601 16:41:32 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:12.601 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:12.601 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:12.601 16:41:32 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:12.601 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:12.602 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:12.602 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:12.602 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:12.860 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:12.860 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:12.860 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:12.860 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:12.860 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:36:12.860 00:36:12.860 --- 10.0.0.2 ping statistics --- 00:36:12.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:12.860 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:36:12.860 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:12.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:12.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:36:12.860 00:36:12.860 --- 10.0.0.1 ping statistics --- 00:36:12.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:12.860 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:36:12.860 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:12.860 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:36:12.860 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:12.860 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:12.860 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:12.860 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:12.860 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:12.860 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:12.860 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:12.860 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:36:12.860 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:12.860 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:12.860 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:12.860 ************************************ 00:36:12.860 START TEST nvmf_target_disconnect_tc1 00:36:12.860 ************************************ 00:36:12.860 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:36:12.860 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:12.860 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:36:12.860 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:12.860 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:12.860 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:12.861 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:12.861 16:41:32 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:12.861 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:12.861 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:12.861 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:12.861 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:36:12.861 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:12.861 EAL: No free 2048 kB hugepages reported on node 1 00:36:12.861 [2024-07-26 16:41:32.610158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.861 [2024-07-26 16:41:32.610285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2000 with addr=10.0.0.2, port=4420 00:36:12.861 [2024-07-26 16:41:32.610374] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:36:12.861 [2024-07-26 16:41:32.610405] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:12.861 [2024-07-26 16:41:32.610432] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:36:12.861 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:36:12.861 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:36:13.119 Initializing NVMe Controllers 00:36:13.119 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:36:13.119 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:13.119 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:13.119 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:13.120 00:36:13.120 real 0m0.213s 00:36:13.120 user 0m0.089s 00:36:13.120 sys 0m0.123s 00:36:13.120 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:13.120 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:36:13.120 ************************************ 00:36:13.120 END TEST nvmf_target_disconnect_tc1 00:36:13.120 ************************************ 00:36:13.120 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:36:13.120 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:13.120 16:41:32 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:13.120 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:13.120 ************************************ 00:36:13.120 START TEST nvmf_target_disconnect_tc2 00:36:13.120 ************************************ 00:36:13.120 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:36:13.120 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:36:13.120 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:13.120 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:13.120 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:13.120 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:13.120 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=822703 00:36:13.120 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:13.120 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 822703 00:36:13.120 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 822703 ']' 00:36:13.120 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:13.120 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:13.120 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:13.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:13.120 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:13.120 16:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:13.120 [2024-07-26 16:41:32.783458] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:36:13.120 [2024-07-26 16:41:32.783623] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:13.120 EAL: No free 2048 kB hugepages reported on node 1 00:36:13.378 [2024-07-26 16:41:32.923526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:13.642 [2024-07-26 16:41:33.142329] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:36:13.642 [2024-07-26 16:41:33.142390] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:13.642 [2024-07-26 16:41:33.142415] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:13.643 [2024-07-26 16:41:33.142435] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:13.643 [2024-07-26 16:41:33.142454] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:13.643 [2024-07-26 16:41:33.142801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:36:13.643 [2024-07-26 16:41:33.142921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:36:13.643 [2024-07-26 16:41:33.142986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:36:13.643 [2024-07-26 16:41:33.143033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:36:14.206 16:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:14.206 16:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:36:14.206 16:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:14.206 16:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:14.206 16:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:14.206 16:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:14.206 16:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:14.206 16:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.206 16:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:14.206 Malloc0 00:36:14.206 16:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.206 16:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:14.206 16:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.206 16:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:14.206 [2024-07-26 16:41:33.826162] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:14.206 16:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.206 16:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:14.206 16:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:36:14.206 16:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:14.206 16:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.206 16:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:14.206 16:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.206 16:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:14.206 16:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.206 16:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:14.206 16:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.206 16:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:14.206 [2024-07-26 16:41:33.855767] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:14.206 16:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.206 16:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:14.206 16:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.206 16:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:14.206 16:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.206 16:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=822856 00:36:14.206 16:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:36:14.206 16:41:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:14.464 EAL: No free 2048 kB hugepages reported on node 1 00:36:16.379 16:41:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 822703 00:36:16.379 16:41:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O 
failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Write completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Write completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Write completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Write completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Write completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Write completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Write completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Write completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Write completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 [2024-07-26 16:41:35.894165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 
Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Write completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Write completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Write completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Write completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Write completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Write completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 [2024-07-26 16:41:35.894878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Write completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Write completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Write completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Write completed with 
error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.379 Read completed with error (sct=0, sc=8) 00:36:16.379 starting I/O failed 00:36:16.380 Read completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Write completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Write completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Write completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Write completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Write completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Write completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Read completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Write completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Write completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Write completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Read completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 [2024-07-26 16:41:35.895604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.380 Read completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Read completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Read completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Read completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Read completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Read completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Read completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Read completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Write completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Read completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Read completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Read completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Read completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Read completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Write completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Read completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Read completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Write completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Read completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Write completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Write completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Write completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Write completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Write completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Write completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Write completed with error 
(sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Read completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Write completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Read completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Read completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Read completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 Read completed with error (sct=0, sc=8) 00:36:16.380 starting I/O failed 00:36:16.380 [2024-07-26 16:41:35.896255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.380 [2024-07-26 16:41:35.896556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.380 [2024-07-26 16:41:35.896629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.380 qpair failed and we were unable to recover it. 00:36:16.380 [2024-07-26 16:41:35.896863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.380 [2024-07-26 16:41:35.896921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.380 qpair failed and we were unable to recover it. 00:36:16.380 [2024-07-26 16:41:35.897131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.380 [2024-07-26 16:41:35.897168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.380 qpair failed and we were unable to recover it. 00:36:16.380 [2024-07-26 16:41:35.897354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.380 [2024-07-26 16:41:35.897390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.380 qpair failed and we were unable to recover it. 00:36:16.380 [2024-07-26 16:41:35.897553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.380 [2024-07-26 16:41:35.897604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.380 qpair failed and we were unable to recover it. 00:36:16.380 [2024-07-26 16:41:35.897826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.380 [2024-07-26 16:41:35.897864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.380 qpair failed and we were unable to recover it. 00:36:16.380 [2024-07-26 16:41:35.898105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.380 [2024-07-26 16:41:35.898140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.380 qpair failed and we were unable to recover it. 00:36:16.380 [2024-07-26 16:41:35.898337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.380 [2024-07-26 16:41:35.898386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.380 qpair failed and we were unable to recover it. 
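Each of the connection errors above reports errno = 111, which on Linux is ECONNREFUSED: the reconnect example keeps dialing 10.0.0.2:4420 after the target process (pid 822703) was killed with kill -9, so every TCP connect() is refused until a listener returns. A quick, illustrative way to confirm the errno mapping on the build host:

# prints: ECONNREFUSED Connection refused
python3 -c "import errno, os; print(errno.errorcode[111], os.strerror(111))"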
00:36:16.380 [2024-07-26 16:41:35.898596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.380 [2024-07-26 16:41:35.898654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.380 qpair failed and we were unable to recover it. 00:36:16.380 [2024-07-26 16:41:35.899116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.380 [2024-07-26 16:41:35.899151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.380 qpair failed and we were unable to recover it. 00:36:16.380 [2024-07-26 16:41:35.899309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.380 [2024-07-26 16:41:35.899343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.380 qpair failed and we were unable to recover it. 00:36:16.380 [2024-07-26 16:41:35.899595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.380 [2024-07-26 16:41:35.899632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.380 qpair failed and we were unable to recover it. 00:36:16.380 [2024-07-26 16:41:35.899919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.380 [2024-07-26 16:41:35.899954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.380 qpair failed and we were unable to recover it. 00:36:16.380 [2024-07-26 16:41:35.900113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.380 [2024-07-26 16:41:35.900148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.380 qpair failed and we were unable to recover it. 00:36:16.380 [2024-07-26 16:41:35.900325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.380 [2024-07-26 16:41:35.900359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.380 qpair failed and we were unable to recover it. 00:36:16.380 [2024-07-26 16:41:35.900554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.380 [2024-07-26 16:41:35.900588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.380 qpair failed and we were unable to recover it. 00:36:16.380 [2024-07-26 16:41:35.900813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.380 [2024-07-26 16:41:35.900876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.380 qpair failed and we were unable to recover it. 00:36:16.380 [2024-07-26 16:41:35.901131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.380 [2024-07-26 16:41:35.901167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.380 qpair failed and we were unable to recover it. 
00:36:16.380 [2024-07-26 16:41:35.901330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.380 [2024-07-26 16:41:35.901380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.380 qpair failed and we were unable to recover it. 00:36:16.380 [2024-07-26 16:41:35.901705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.380 [2024-07-26 16:41:35.901757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.380 qpair failed and we were unable to recover it. 00:36:16.380 [2024-07-26 16:41:35.901923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.381 [2024-07-26 16:41:35.901960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.381 qpair failed and we were unable to recover it. 00:36:16.381 [2024-07-26 16:41:35.902151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.381 [2024-07-26 16:41:35.902187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.381 qpair failed and we were unable to recover it. 00:36:16.381 [2024-07-26 16:41:35.902374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.381 [2024-07-26 16:41:35.902409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.381 qpair failed and we were unable to recover it. 00:36:16.381 [2024-07-26 16:41:35.902624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.381 [2024-07-26 16:41:35.902677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.381 qpair failed and we were unable to recover it. 00:36:16.381 [2024-07-26 16:41:35.902875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.381 [2024-07-26 16:41:35.902910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.381 qpair failed and we were unable to recover it. 00:36:16.381 [2024-07-26 16:41:35.903095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.381 [2024-07-26 16:41:35.903131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.381 qpair failed and we were unable to recover it. 00:36:16.381 [2024-07-26 16:41:35.903306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.381 [2024-07-26 16:41:35.903356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.381 qpair failed and we were unable to recover it. 00:36:16.381 [2024-07-26 16:41:35.903562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.381 [2024-07-26 16:41:35.903595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.381 qpair failed and we were unable to recover it. 
00:36:16.381 [2024-07-26 16:41:35.903807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.381 [2024-07-26 16:41:35.903856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.381 qpair failed and we were unable to recover it. 00:36:16.381 [2024-07-26 16:41:35.904040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.381 [2024-07-26 16:41:35.904100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.381 qpair failed and we were unable to recover it. 00:36:16.381 [2024-07-26 16:41:35.904305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.381 [2024-07-26 16:41:35.904340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.381 qpair failed and we were unable to recover it. 00:36:16.381 [2024-07-26 16:41:35.904517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.381 [2024-07-26 16:41:35.904552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.381 qpair failed and we were unable to recover it. 00:36:16.381 [2024-07-26 16:41:35.904767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.381 [2024-07-26 16:41:35.904802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.381 qpair failed and we were unable to recover it. 00:36:16.381 [2024-07-26 16:41:35.904962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.381 [2024-07-26 16:41:35.904997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.381 qpair failed and we were unable to recover it. 00:36:16.381 [2024-07-26 16:41:35.905199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.381 [2024-07-26 16:41:35.905236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.381 qpair failed and we were unable to recover it. 00:36:16.381 [2024-07-26 16:41:35.905415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.381 [2024-07-26 16:41:35.905489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.381 qpair failed and we were unable to recover it. 00:36:16.381 [2024-07-26 16:41:35.905712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.381 [2024-07-26 16:41:35.905749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.381 qpair failed and we were unable to recover it. 00:36:16.381 [2024-07-26 16:41:35.905964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.381 [2024-07-26 16:41:35.906016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.381 qpair failed and we were unable to recover it. 
00:36:16.381 [2024-07-26 16:41:35.906217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.381 [2024-07-26 16:41:35.906253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.381 qpair failed and we were unable to recover it. 00:36:16.381 [2024-07-26 16:41:35.906468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.381 [2024-07-26 16:41:35.906517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.381 qpair failed and we were unable to recover it. 00:36:16.381 [2024-07-26 16:41:35.906744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.381 [2024-07-26 16:41:35.906845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.381 qpair failed and we were unable to recover it. 00:36:16.381 [2024-07-26 16:41:35.907057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.381 [2024-07-26 16:41:35.907102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.381 qpair failed and we were unable to recover it. 00:36:16.381 [2024-07-26 16:41:35.907258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.381 [2024-07-26 16:41:35.907292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.381 qpair failed and we were unable to recover it. 00:36:16.381 [2024-07-26 16:41:35.907487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.381 [2024-07-26 16:41:35.907537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.381 qpair failed and we were unable to recover it. 00:36:16.381 [2024-07-26 16:41:35.907907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.381 [2024-07-26 16:41:35.907965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.381 qpair failed and we were unable to recover it. 00:36:16.381 [2024-07-26 16:41:35.908239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.381 [2024-07-26 16:41:35.908273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.381 qpair failed and we were unable to recover it. 00:36:16.381 [2024-07-26 16:41:35.908491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.381 [2024-07-26 16:41:35.908553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.381 qpair failed and we were unable to recover it. 00:36:16.381 [2024-07-26 16:41:35.908880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.381 [2024-07-26 16:41:35.908941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.381 qpair failed and we were unable to recover it. 
00:36:16.381 [2024-07-26 16:41:35.909173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.381 [2024-07-26 16:41:35.909215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.381 qpair failed and we were unable to recover it. 00:36:16.381 [2024-07-26 16:41:35.909406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.381 [2024-07-26 16:41:35.909456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.381 qpair failed and we were unable to recover it. 00:36:16.381 [2024-07-26 16:41:35.909684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.381 [2024-07-26 16:41:35.909719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.381 qpair failed and we were unable to recover it. 00:36:16.381 [2024-07-26 16:41:35.909916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.381 [2024-07-26 16:41:35.909955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.381 qpair failed and we were unable to recover it. 00:36:16.381 [2024-07-26 16:41:35.910174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.381 [2024-07-26 16:41:35.910209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.381 qpair failed and we were unable to recover it. 00:36:16.381 [2024-07-26 16:41:35.910428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.381 [2024-07-26 16:41:35.910462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.381 qpair failed and we were unable to recover it. 00:36:16.381 [2024-07-26 16:41:35.910636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.381 [2024-07-26 16:41:35.910676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.381 qpair failed and we were unable to recover it. 00:36:16.381 [2024-07-26 16:41:35.910871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.381 [2024-07-26 16:41:35.910910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.381 qpair failed and we were unable to recover it. 00:36:16.381 [2024-07-26 16:41:35.911125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.381 [2024-07-26 16:41:35.911161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.381 qpair failed and we were unable to recover it. 00:36:16.381 [2024-07-26 16:41:35.911673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.381 [2024-07-26 16:41:35.911707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.382 qpair failed and we were unable to recover it. 
00:36:16.382 [2024-07-26 16:41:35.911880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.382 [2024-07-26 16:41:35.911914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.382 qpair failed and we were unable to recover it. 00:36:16.382 [2024-07-26 16:41:35.912176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.382 [2024-07-26 16:41:35.912211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.382 qpair failed and we were unable to recover it. 00:36:16.382 [2024-07-26 16:41:35.912362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.382 [2024-07-26 16:41:35.912413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.382 qpair failed and we were unable to recover it. 00:36:16.382 [2024-07-26 16:41:35.912636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.382 [2024-07-26 16:41:35.912670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.382 qpair failed and we were unable to recover it. 00:36:16.382 [2024-07-26 16:41:35.912856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.382 [2024-07-26 16:41:35.912890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.382 qpair failed and we were unable to recover it. 00:36:16.382 [2024-07-26 16:41:35.913109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.382 [2024-07-26 16:41:35.913143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.382 qpair failed and we were unable to recover it. 00:36:16.382 [2024-07-26 16:41:35.913321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.382 [2024-07-26 16:41:35.913371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.382 qpair failed and we were unable to recover it. 00:36:16.382 [2024-07-26 16:41:35.913574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.382 [2024-07-26 16:41:35.913612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.382 qpair failed and we were unable to recover it. 00:36:16.382 [2024-07-26 16:41:35.913845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.382 [2024-07-26 16:41:35.913880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.382 qpair failed and we were unable to recover it. 00:36:16.382 [2024-07-26 16:41:35.914067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.382 [2024-07-26 16:41:35.914114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.382 qpair failed and we were unable to recover it. 
00:36:16.382 [2024-07-26 16:41:35.914259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.382 [2024-07-26 16:41:35.914292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.382 qpair failed and we were unable to recover it. 00:36:16.382 [2024-07-26 16:41:35.914568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.382 [2024-07-26 16:41:35.914602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.382 qpair failed and we were unable to recover it. 00:36:16.382 [2024-07-26 16:41:35.914823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.382 [2024-07-26 16:41:35.914857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.382 qpair failed and we were unable to recover it. 00:36:16.382 [2024-07-26 16:41:35.915057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.382 [2024-07-26 16:41:35.915103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.382 qpair failed and we were unable to recover it. 00:36:16.382 [2024-07-26 16:41:35.915273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.382 [2024-07-26 16:41:35.915307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.382 qpair failed and we were unable to recover it. 00:36:16.382 [2024-07-26 16:41:35.915516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.382 [2024-07-26 16:41:35.915552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.382 qpair failed and we were unable to recover it. 00:36:16.382 [2024-07-26 16:41:35.915775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.382 [2024-07-26 16:41:35.915809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.382 qpair failed and we were unable to recover it. 00:36:16.382 [2024-07-26 16:41:35.916004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.382 [2024-07-26 16:41:35.916055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.382 qpair failed and we were unable to recover it. 00:36:16.382 [2024-07-26 16:41:35.916316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.382 [2024-07-26 16:41:35.916351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.382 qpair failed and we were unable to recover it. 00:36:16.382 [2024-07-26 16:41:35.916627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.382 [2024-07-26 16:41:35.916662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.382 qpair failed and we were unable to recover it. 
00:36:16.382 [2024-07-26 16:41:35.916862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.382 [2024-07-26 16:41:35.916908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.382 qpair failed and we were unable to recover it. 00:36:16.382 [2024-07-26 16:41:35.917210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.382 [2024-07-26 16:41:35.917245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.382 qpair failed and we were unable to recover it. 00:36:16.382 [2024-07-26 16:41:35.917477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.382 [2024-07-26 16:41:35.917515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.382 qpair failed and we were unable to recover it. 00:36:16.382 [2024-07-26 16:41:35.917771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.382 [2024-07-26 16:41:35.917804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.382 qpair failed and we were unable to recover it. 00:36:16.382 [2024-07-26 16:41:35.917979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.382 [2024-07-26 16:41:35.918012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.382 qpair failed and we were unable to recover it. 00:36:16.382 [2024-07-26 16:41:35.918193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.382 [2024-07-26 16:41:35.918227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.382 qpair failed and we were unable to recover it. 00:36:16.382 [2024-07-26 16:41:35.918470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.382 [2024-07-26 16:41:35.918504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.382 qpair failed and we were unable to recover it. 00:36:16.382 [2024-07-26 16:41:35.918703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.382 [2024-07-26 16:41:35.918736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.382 qpair failed and we were unable to recover it. 00:36:16.382 [2024-07-26 16:41:35.918925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.382 [2024-07-26 16:41:35.918960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.382 qpair failed and we were unable to recover it. 00:36:16.382 [2024-07-26 16:41:35.919149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.382 [2024-07-26 16:41:35.919187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.382 qpair failed and we were unable to recover it. 
00:36:16.382 [2024-07-26 16:41:35.919375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.382 [2024-07-26 16:41:35.919412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.382 qpair failed and we were unable to recover it. 00:36:16.382 [2024-07-26 16:41:35.919599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.382 [2024-07-26 16:41:35.919633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.382 qpair failed and we were unable to recover it. 00:36:16.382 [2024-07-26 16:41:35.919863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.382 [2024-07-26 16:41:35.919896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.382 qpair failed and we were unable to recover it. 00:36:16.382 [2024-07-26 16:41:35.920066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.382 [2024-07-26 16:41:35.920101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.382 qpair failed and we were unable to recover it. 00:36:16.382 [2024-07-26 16:41:35.920290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.382 [2024-07-26 16:41:35.920339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.382 qpair failed and we were unable to recover it. 00:36:16.382 [2024-07-26 16:41:35.920488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.382 [2024-07-26 16:41:35.920521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.382 qpair failed and we were unable to recover it. 00:36:16.382 [2024-07-26 16:41:35.920742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.382 [2024-07-26 16:41:35.920775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.382 qpair failed and we were unable to recover it. 00:36:16.382 [2024-07-26 16:41:35.920979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.383 [2024-07-26 16:41:35.921012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.383 qpair failed and we were unable to recover it. 00:36:16.383 [2024-07-26 16:41:35.921281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.383 [2024-07-26 16:41:35.921325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.383 qpair failed and we were unable to recover it. 00:36:16.383 [2024-07-26 16:41:35.921605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.383 [2024-07-26 16:41:35.921659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.383 qpair failed and we were unable to recover it. 
00:36:16.383 [2024-07-26 16:41:35.921862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.383 [2024-07-26 16:41:35.921900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.383 qpair failed and we were unable to recover it. 00:36:16.383 [2024-07-26 16:41:35.922073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.383 [2024-07-26 16:41:35.922118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.383 qpair failed and we were unable to recover it. 00:36:16.383 [2024-07-26 16:41:35.922344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.383 [2024-07-26 16:41:35.922382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.383 qpair failed and we were unable to recover it. 00:36:16.383 [2024-07-26 16:41:35.922622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.383 [2024-07-26 16:41:35.922653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.383 qpair failed and we were unable to recover it. 00:36:16.383 [2024-07-26 16:41:35.922925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.383 [2024-07-26 16:41:35.922960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.383 qpair failed and we were unable to recover it. 00:36:16.383 [2024-07-26 16:41:35.923214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.383 [2024-07-26 16:41:35.923248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.383 qpair failed and we were unable to recover it. 00:36:16.383 [2024-07-26 16:41:35.923456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.383 [2024-07-26 16:41:35.923505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.383 qpair failed and we were unable to recover it. 00:36:16.383 [2024-07-26 16:41:35.923729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.383 [2024-07-26 16:41:35.923779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.383 qpair failed and we were unable to recover it. 00:36:16.383 [2024-07-26 16:41:35.924019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.383 [2024-07-26 16:41:35.924056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.383 qpair failed and we were unable to recover it. 00:36:16.383 [2024-07-26 16:41:35.924282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.383 [2024-07-26 16:41:35.924316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.383 qpair failed and we were unable to recover it. 
00:36:16.383 [2024-07-26 16:41:35.924492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.383 [2024-07-26 16:41:35.924527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.383 qpair failed and we were unable to recover it. 00:36:16.383 [2024-07-26 16:41:35.924801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.383 [2024-07-26 16:41:35.924850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.383 qpair failed and we were unable to recover it. 00:36:16.383 [2024-07-26 16:41:35.925077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.383 [2024-07-26 16:41:35.925114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.383 qpair failed and we were unable to recover it. 00:36:16.383 [2024-07-26 16:41:35.925299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.383 [2024-07-26 16:41:35.925333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.383 qpair failed and we were unable to recover it. 00:36:16.383 [2024-07-26 16:41:35.925514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.383 [2024-07-26 16:41:35.925548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.383 qpair failed and we were unable to recover it. 00:36:16.383 [2024-07-26 16:41:35.925770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.383 [2024-07-26 16:41:35.925824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.383 qpair failed and we were unable to recover it. 00:36:16.383 [2024-07-26 16:41:35.926027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.383 [2024-07-26 16:41:35.926067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.383 qpair failed and we were unable to recover it. 00:36:16.383 [2024-07-26 16:41:35.926274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.383 [2024-07-26 16:41:35.926324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.383 qpair failed and we were unable to recover it. 00:36:16.383 [2024-07-26 16:41:35.926564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.383 [2024-07-26 16:41:35.926601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.383 qpair failed and we were unable to recover it. 00:36:16.383 [2024-07-26 16:41:35.926750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.383 [2024-07-26 16:41:35.926799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.383 qpair failed and we were unable to recover it. 
00:36:16.389 [2024-07-26 16:41:35.974193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.389 [2024-07-26 16:41:35.974227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.389 qpair failed and we were unable to recover it. 00:36:16.389 [2024-07-26 16:41:35.974372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.389 [2024-07-26 16:41:35.974406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.389 qpair failed and we were unable to recover it. 00:36:16.389 [2024-07-26 16:41:35.974609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.389 [2024-07-26 16:41:35.974644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.389 qpair failed and we were unable to recover it. 00:36:16.389 [2024-07-26 16:41:35.974826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.389 [2024-07-26 16:41:35.974862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.389 qpair failed and we were unable to recover it. 00:36:16.389 [2024-07-26 16:41:35.975017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.389 [2024-07-26 16:41:35.975052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.389 qpair failed and we were unable to recover it. 00:36:16.389 [2024-07-26 16:41:35.975283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.389 [2024-07-26 16:41:35.975320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.389 qpair failed and we were unable to recover it. 00:36:16.389 [2024-07-26 16:41:35.975528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.389 [2024-07-26 16:41:35.975563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.389 qpair failed and we were unable to recover it. 00:36:16.389 [2024-07-26 16:41:35.975820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.389 [2024-07-26 16:41:35.975854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.389 qpair failed and we were unable to recover it. 00:36:16.389 [2024-07-26 16:41:35.976082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.389 [2024-07-26 16:41:35.976134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.389 qpair failed and we were unable to recover it. 00:36:16.389 [2024-07-26 16:41:35.976331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.389 [2024-07-26 16:41:35.976365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.389 qpair failed and we were unable to recover it. 
00:36:16.389 [2024-07-26 16:41:35.976544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.389 [2024-07-26 16:41:35.976578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.389 qpair failed and we were unable to recover it. 00:36:16.389 [2024-07-26 16:41:35.976760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.389 [2024-07-26 16:41:35.976795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.389 qpair failed and we were unable to recover it. 00:36:16.389 [2024-07-26 16:41:35.977014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.389 [2024-07-26 16:41:35.977049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.389 qpair failed and we were unable to recover it. 00:36:16.389 [2024-07-26 16:41:35.977264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.389 [2024-07-26 16:41:35.977298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.389 qpair failed and we were unable to recover it. 00:36:16.389 [2024-07-26 16:41:35.977465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.389 [2024-07-26 16:41:35.977500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.389 qpair failed and we were unable to recover it. 00:36:16.389 [2024-07-26 16:41:35.977685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.389 [2024-07-26 16:41:35.977719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.389 qpair failed and we were unable to recover it. 00:36:16.389 [2024-07-26 16:41:35.977918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.389 [2024-07-26 16:41:35.977952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.389 qpair failed and we were unable to recover it. 00:36:16.389 [2024-07-26 16:41:35.978127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.389 [2024-07-26 16:41:35.978161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.389 qpair failed and we were unable to recover it. 00:36:16.389 [2024-07-26 16:41:35.978322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.389 [2024-07-26 16:41:35.978355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.389 qpair failed and we were unable to recover it. 00:36:16.389 [2024-07-26 16:41:35.978535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.389 [2024-07-26 16:41:35.978569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.389 qpair failed and we were unable to recover it. 
00:36:16.389 [2024-07-26 16:41:35.978771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.389 [2024-07-26 16:41:35.978822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.389 qpair failed and we were unable to recover it. 00:36:16.389 [2024-07-26 16:41:35.979087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.389 [2024-07-26 16:41:35.979121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.389 qpair failed and we were unable to recover it. 00:36:16.389 [2024-07-26 16:41:35.979323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.389 [2024-07-26 16:41:35.979358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.389 qpair failed and we were unable to recover it. 00:36:16.389 [2024-07-26 16:41:35.979558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.389 [2024-07-26 16:41:35.979593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.389 qpair failed and we were unable to recover it. 00:36:16.389 [2024-07-26 16:41:35.979804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.389 [2024-07-26 16:41:35.979838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.389 qpair failed and we were unable to recover it. 00:36:16.390 [2024-07-26 16:41:35.980019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.390 [2024-07-26 16:41:35.980054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.390 qpair failed and we were unable to recover it. 00:36:16.390 [2024-07-26 16:41:35.980318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.390 [2024-07-26 16:41:35.980357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.390 qpair failed and we were unable to recover it. 00:36:16.390 [2024-07-26 16:41:35.980558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.390 [2024-07-26 16:41:35.980623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.390 qpair failed and we were unable to recover it. 00:36:16.390 [2024-07-26 16:41:35.980926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.390 [2024-07-26 16:41:35.980983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.390 qpair failed and we were unable to recover it. 00:36:16.390 [2024-07-26 16:41:35.981183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.390 [2024-07-26 16:41:35.981218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.390 qpair failed and we were unable to recover it. 
00:36:16.390 [2024-07-26 16:41:35.981365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.390 [2024-07-26 16:41:35.981399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.390 qpair failed and we were unable to recover it. 00:36:16.390 [2024-07-26 16:41:35.981628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.390 [2024-07-26 16:41:35.981687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.390 qpair failed and we were unable to recover it. 00:36:16.390 [2024-07-26 16:41:35.981882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.390 [2024-07-26 16:41:35.981920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.390 qpair failed and we were unable to recover it. 00:36:16.390 [2024-07-26 16:41:35.982122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.390 [2024-07-26 16:41:35.982157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.390 qpair failed and we were unable to recover it. 00:36:16.390 [2024-07-26 16:41:35.982338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.390 [2024-07-26 16:41:35.982371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.390 qpair failed and we were unable to recover it. 00:36:16.390 [2024-07-26 16:41:35.982551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.390 [2024-07-26 16:41:35.982592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.390 qpair failed and we were unable to recover it. 00:36:16.390 [2024-07-26 16:41:35.982795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.390 [2024-07-26 16:41:35.982828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.390 qpair failed and we were unable to recover it. 00:36:16.390 [2024-07-26 16:41:35.982985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.390 [2024-07-26 16:41:35.983023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.390 qpair failed and we were unable to recover it. 00:36:16.390 [2024-07-26 16:41:35.983201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.390 [2024-07-26 16:41:35.983234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.390 qpair failed and we were unable to recover it. 00:36:16.390 [2024-07-26 16:41:35.983391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.390 [2024-07-26 16:41:35.983426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.390 qpair failed and we were unable to recover it. 
00:36:16.390 [2024-07-26 16:41:35.983633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.390 [2024-07-26 16:41:35.983667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.390 qpair failed and we were unable to recover it. 00:36:16.390 [2024-07-26 16:41:35.983869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.390 [2024-07-26 16:41:35.983903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.390 qpair failed and we were unable to recover it. 00:36:16.390 [2024-07-26 16:41:35.984066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.390 [2024-07-26 16:41:35.984111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.390 qpair failed and we were unable to recover it. 00:36:16.390 [2024-07-26 16:41:35.984307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.390 [2024-07-26 16:41:35.984353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.390 qpair failed and we were unable to recover it. 00:36:16.390 [2024-07-26 16:41:35.984552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.390 [2024-07-26 16:41:35.984587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.390 qpair failed and we were unable to recover it. 00:36:16.390 [2024-07-26 16:41:35.984765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.390 [2024-07-26 16:41:35.984799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.390 qpair failed and we were unable to recover it. 00:36:16.390 [2024-07-26 16:41:35.984984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.390 [2024-07-26 16:41:35.985020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.390 qpair failed and we were unable to recover it. 00:36:16.390 [2024-07-26 16:41:35.985245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.390 [2024-07-26 16:41:35.985280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.390 qpair failed and we were unable to recover it. 00:36:16.390 [2024-07-26 16:41:35.985490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.390 [2024-07-26 16:41:35.985525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.390 qpair failed and we were unable to recover it. 00:36:16.390 [2024-07-26 16:41:35.985729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.390 [2024-07-26 16:41:35.985798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.390 qpair failed and we were unable to recover it. 
00:36:16.390 [2024-07-26 16:41:35.986031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.390 [2024-07-26 16:41:35.986075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.390 qpair failed and we were unable to recover it. 00:36:16.390 [2024-07-26 16:41:35.986271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.390 [2024-07-26 16:41:35.986305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.390 qpair failed and we were unable to recover it. 00:36:16.390 [2024-07-26 16:41:35.986455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.390 [2024-07-26 16:41:35.986491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.390 qpair failed and we were unable to recover it. 00:36:16.390 [2024-07-26 16:41:35.986638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.390 [2024-07-26 16:41:35.986690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.390 qpair failed and we were unable to recover it. 00:36:16.390 [2024-07-26 16:41:35.986892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.390 [2024-07-26 16:41:35.986928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.390 qpair failed and we were unable to recover it. 00:36:16.390 [2024-07-26 16:41:35.987110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.390 [2024-07-26 16:41:35.987156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.390 qpair failed and we were unable to recover it. 00:36:16.390 [2024-07-26 16:41:35.987390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.390 [2024-07-26 16:41:35.987428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.390 qpair failed and we were unable to recover it. 00:36:16.390 [2024-07-26 16:41:35.987652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.390 [2024-07-26 16:41:35.987686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.390 qpair failed and we were unable to recover it. 00:36:16.390 [2024-07-26 16:41:35.987918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.390 [2024-07-26 16:41:35.987956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.390 qpair failed and we were unable to recover it. 00:36:16.390 [2024-07-26 16:41:35.988171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.390 [2024-07-26 16:41:35.988205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.390 qpair failed and we were unable to recover it. 
00:36:16.390 [2024-07-26 16:41:35.988386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.390 [2024-07-26 16:41:35.988421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.390 qpair failed and we were unable to recover it. 00:36:16.390 [2024-07-26 16:41:35.988630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.390 [2024-07-26 16:41:35.988664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.390 qpair failed and we were unable to recover it. 00:36:16.390 [2024-07-26 16:41:35.988881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.391 [2024-07-26 16:41:35.988919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.391 qpair failed and we were unable to recover it. 00:36:16.391 [2024-07-26 16:41:35.989122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.391 [2024-07-26 16:41:35.989157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.391 qpair failed and we were unable to recover it. 00:36:16.391 [2024-07-26 16:41:35.989321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.391 [2024-07-26 16:41:35.989355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.391 qpair failed and we were unable to recover it. 00:36:16.391 [2024-07-26 16:41:35.989537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.391 [2024-07-26 16:41:35.989590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.391 qpair failed and we were unable to recover it. 00:36:16.391 [2024-07-26 16:41:35.989796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.391 [2024-07-26 16:41:35.989831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.391 qpair failed and we were unable to recover it. 00:36:16.391 [2024-07-26 16:41:35.990012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.391 [2024-07-26 16:41:35.990046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.391 qpair failed and we were unable to recover it. 00:36:16.391 [2024-07-26 16:41:35.990241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.391 [2024-07-26 16:41:35.990275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.391 qpair failed and we were unable to recover it. 00:36:16.391 [2024-07-26 16:41:35.990485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.391 [2024-07-26 16:41:35.990519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.391 qpair failed and we were unable to recover it. 
00:36:16.391 [2024-07-26 16:41:35.990776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.391 [2024-07-26 16:41:35.990811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.391 qpair failed and we were unable to recover it. 00:36:16.391 [2024-07-26 16:41:35.991018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.391 [2024-07-26 16:41:35.991052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.391 qpair failed and we were unable to recover it. 00:36:16.391 [2024-07-26 16:41:35.991225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.391 [2024-07-26 16:41:35.991259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.391 qpair failed and we were unable to recover it. 00:36:16.391 [2024-07-26 16:41:35.991420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.391 [2024-07-26 16:41:35.991454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.391 qpair failed and we were unable to recover it. 00:36:16.391 [2024-07-26 16:41:35.991683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.391 [2024-07-26 16:41:35.991721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.391 qpair failed and we were unable to recover it. 00:36:16.391 [2024-07-26 16:41:35.991900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.391 [2024-07-26 16:41:35.991933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.391 qpair failed and we were unable to recover it. 00:36:16.391 [2024-07-26 16:41:35.992134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.391 [2024-07-26 16:41:35.992166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.391 qpair failed and we were unable to recover it. 00:36:16.391 [2024-07-26 16:41:35.992310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.391 [2024-07-26 16:41:35.992352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.391 qpair failed and we were unable to recover it. 00:36:16.391 [2024-07-26 16:41:35.992557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.391 [2024-07-26 16:41:35.992593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.391 qpair failed and we were unable to recover it. 00:36:16.391 [2024-07-26 16:41:35.992741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.391 [2024-07-26 16:41:35.992773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.391 qpair failed and we were unable to recover it. 
00:36:16.391 [2024-07-26 16:41:35.992936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.391 [2024-07-26 16:41:35.992968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.391 qpair failed and we were unable to recover it. 00:36:16.391 [2024-07-26 16:41:35.993149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.391 [2024-07-26 16:41:35.993181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.391 qpair failed and we were unable to recover it. 00:36:16.391 [2024-07-26 16:41:35.993361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.391 [2024-07-26 16:41:35.993393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.391 qpair failed and we were unable to recover it. 00:36:16.391 [2024-07-26 16:41:35.993541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.391 [2024-07-26 16:41:35.993572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.391 qpair failed and we were unable to recover it. 00:36:16.391 [2024-07-26 16:41:35.993745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.391 [2024-07-26 16:41:35.993776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.391 qpair failed and we were unable to recover it. 00:36:16.391 [2024-07-26 16:41:35.993969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.391 [2024-07-26 16:41:35.994004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.391 qpair failed and we were unable to recover it. 00:36:16.391 [2024-07-26 16:41:35.994225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.391 [2024-07-26 16:41:35.994257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.391 qpair failed and we were unable to recover it. 00:36:16.391 [2024-07-26 16:41:35.994427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.391 [2024-07-26 16:41:35.994459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.391 qpair failed and we were unable to recover it. 00:36:16.391 [2024-07-26 16:41:35.994636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.391 [2024-07-26 16:41:35.994668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.391 qpair failed and we were unable to recover it. 00:36:16.391 [2024-07-26 16:41:35.994864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.391 [2024-07-26 16:41:35.994916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.391 qpair failed and we were unable to recover it. 
00:36:16.391 [2024-07-26 16:41:35.995165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.391 [2024-07-26 16:41:35.995199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.391 qpair failed and we were unable to recover it. 00:36:16.391 [2024-07-26 16:41:35.995359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.391 [2024-07-26 16:41:35.995405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.391 qpair failed and we were unable to recover it. 00:36:16.391 [2024-07-26 16:41:35.995612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.391 [2024-07-26 16:41:35.995651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.391 qpair failed and we were unable to recover it. 00:36:16.391 [2024-07-26 16:41:35.995844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.391 [2024-07-26 16:41:35.995878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.391 qpair failed and we were unable to recover it. 00:36:16.391 [2024-07-26 16:41:35.996067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.391 [2024-07-26 16:41:35.996102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.391 qpair failed and we were unable to recover it. 00:36:16.391 [2024-07-26 16:41:35.996369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.391 [2024-07-26 16:41:35.996404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.391 qpair failed and we were unable to recover it. 00:36:16.391 [2024-07-26 16:41:35.996656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.391 [2024-07-26 16:41:35.996696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.391 qpair failed and we were unable to recover it. 00:36:16.391 [2024-07-26 16:41:35.996909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.391 [2024-07-26 16:41:35.996944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.391 qpair failed and we were unable to recover it. 00:36:16.391 [2024-07-26 16:41:35.997138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.391 [2024-07-26 16:41:35.997172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.391 qpair failed and we were unable to recover it. 00:36:16.391 [2024-07-26 16:41:35.997427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.391 [2024-07-26 16:41:35.997468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.391 qpair failed and we were unable to recover it. 
00:36:16.392 [2024-07-26 16:41:35.997654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.392 [2024-07-26 16:41:35.997689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.392 qpair failed and we were unable to recover it. 00:36:16.392 [2024-07-26 16:41:35.997845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.392 [2024-07-26 16:41:35.997879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.392 qpair failed and we were unable to recover it. 00:36:16.392 [2024-07-26 16:41:35.998467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.392 [2024-07-26 16:41:35.998511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.392 qpair failed and we were unable to recover it. 00:36:16.392 [2024-07-26 16:41:35.998708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.392 [2024-07-26 16:41:35.998747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.392 qpair failed and we were unable to recover it. 00:36:16.392 [2024-07-26 16:41:35.998987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.392 [2024-07-26 16:41:35.999026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.392 qpair failed and we were unable to recover it. 00:36:16.392 [2024-07-26 16:41:35.999227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.392 [2024-07-26 16:41:35.999262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.392 qpair failed and we were unable to recover it. 00:36:16.392 [2024-07-26 16:41:35.999474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.392 [2024-07-26 16:41:35.999508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.392 qpair failed and we were unable to recover it. 00:36:16.392 [2024-07-26 16:41:35.999737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.392 [2024-07-26 16:41:35.999774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.392 qpair failed and we were unable to recover it. 00:36:16.392 [2024-07-26 16:41:35.999968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.392 [2024-07-26 16:41:36.000002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.392 qpair failed and we were unable to recover it. 00:36:16.392 [2024-07-26 16:41:36.000171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.392 [2024-07-26 16:41:36.000206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.392 qpair failed and we were unable to recover it. 
00:36:16.392 [2024-07-26 16:41:36.000388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.392 [2024-07-26 16:41:36.000426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.392 qpair failed and we were unable to recover it. 00:36:16.392 [2024-07-26 16:41:36.000676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.392 [2024-07-26 16:41:36.000719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.392 qpair failed and we were unable to recover it. 00:36:16.392 [2024-07-26 16:41:36.000941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.392 [2024-07-26 16:41:36.000975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.392 qpair failed and we were unable to recover it. 00:36:16.392 [2024-07-26 16:41:36.001137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.392 [2024-07-26 16:41:36.001172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.392 qpair failed and we were unable to recover it. 00:36:16.392 [2024-07-26 16:41:36.001442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.392 [2024-07-26 16:41:36.001477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.392 qpair failed and we were unable to recover it. 00:36:16.392 [2024-07-26 16:41:36.001694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.392 [2024-07-26 16:41:36.001750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.392 qpair failed and we were unable to recover it. 00:36:16.392 [2024-07-26 16:41:36.002042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.392 [2024-07-26 16:41:36.002094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.392 qpair failed and we were unable to recover it. 00:36:16.392 [2024-07-26 16:41:36.002297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.392 [2024-07-26 16:41:36.002342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.392 qpair failed and we were unable to recover it. 00:36:16.392 [2024-07-26 16:41:36.002637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.392 [2024-07-26 16:41:36.002678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.392 qpair failed and we were unable to recover it. 00:36:16.392 [2024-07-26 16:41:36.002911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.392 [2024-07-26 16:41:36.002954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.392 qpair failed and we were unable to recover it. 
00:36:16.392 [2024-07-26 16:41:36.003146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.392 [2024-07-26 16:41:36.003183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.392 qpair failed and we were unable to recover it. 00:36:16.392 [2024-07-26 16:41:36.003357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.392 [2024-07-26 16:41:36.003402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.392 qpair failed and we were unable to recover it. 00:36:16.392 [2024-07-26 16:41:36.003564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.392 [2024-07-26 16:41:36.003603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.392 qpair failed and we were unable to recover it. 00:36:16.392 [2024-07-26 16:41:36.003817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.392 [2024-07-26 16:41:36.003854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.392 qpair failed and we were unable to recover it. 00:36:16.392 [2024-07-26 16:41:36.004119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.392 [2024-07-26 16:41:36.004154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.392 qpair failed and we were unable to recover it. 00:36:16.392 [2024-07-26 16:41:36.004309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.392 [2024-07-26 16:41:36.004357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.392 qpair failed and we were unable to recover it. 00:36:16.392 [2024-07-26 16:41:36.004623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.392 [2024-07-26 16:41:36.004659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.392 qpair failed and we were unable to recover it. 00:36:16.392 [2024-07-26 16:41:36.004980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.392 [2024-07-26 16:41:36.005038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.392 qpair failed and we were unable to recover it. 00:36:16.392 [2024-07-26 16:41:36.005253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.392 [2024-07-26 16:41:36.005289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.392 qpair failed and we were unable to recover it. 00:36:16.392 [2024-07-26 16:41:36.005482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.392 [2024-07-26 16:41:36.005517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.392 qpair failed and we were unable to recover it. 
00:36:16.392 [2024-07-26 16:41:36.005804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.392 [2024-07-26 16:41:36.005842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.392 qpair failed and we were unable to recover it. 00:36:16.392 [2024-07-26 16:41:36.006044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.392 [2024-07-26 16:41:36.006091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.392 qpair failed and we were unable to recover it. 00:36:16.392 [2024-07-26 16:41:36.006259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.392 [2024-07-26 16:41:36.006294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.392 qpair failed and we were unable to recover it. 00:36:16.392 [2024-07-26 16:41:36.006482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.392 [2024-07-26 16:41:36.006520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.392 qpair failed and we were unable to recover it. 00:36:16.392 [2024-07-26 16:41:36.006731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.392 [2024-07-26 16:41:36.006770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.392 qpair failed and we were unable to recover it. 00:36:16.392 [2024-07-26 16:41:36.006960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.392 [2024-07-26 16:41:36.006994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.392 qpair failed and we were unable to recover it. 00:36:16.392 [2024-07-26 16:41:36.007156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.392 [2024-07-26 16:41:36.007192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.392 qpair failed and we were unable to recover it. 00:36:16.392 [2024-07-26 16:41:36.007378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.393 [2024-07-26 16:41:36.007413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.393 qpair failed and we were unable to recover it. 00:36:16.393 [2024-07-26 16:41:36.007583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.393 [2024-07-26 16:41:36.007618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.393 qpair failed and we were unable to recover it. 00:36:16.393 [2024-07-26 16:41:36.007807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.393 [2024-07-26 16:41:36.007846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.393 qpair failed and we were unable to recover it. 
00:36:16.393 [2024-07-26 16:41:36.008056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.393 [2024-07-26 16:41:36.008105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.393 qpair failed and we were unable to recover it. 00:36:16.393 [2024-07-26 16:41:36.008290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.393 [2024-07-26 16:41:36.008335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.393 qpair failed and we were unable to recover it. 00:36:16.393 [2024-07-26 16:41:36.008513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.393 [2024-07-26 16:41:36.008564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.393 qpair failed and we were unable to recover it. 00:36:16.393 [2024-07-26 16:41:36.008765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.393 [2024-07-26 16:41:36.008805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.393 qpair failed and we were unable to recover it. 00:36:16.393 [2024-07-26 16:41:36.008992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.393 [2024-07-26 16:41:36.009026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.393 qpair failed and we were unable to recover it. 00:36:16.393 [2024-07-26 16:41:36.009206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.393 [2024-07-26 16:41:36.009241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.393 qpair failed and we were unable to recover it. 00:36:16.393 [2024-07-26 16:41:36.009421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.393 [2024-07-26 16:41:36.009479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.393 qpair failed and we were unable to recover it. 00:36:16.393 [2024-07-26 16:41:36.009722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.393 [2024-07-26 16:41:36.009757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.393 qpair failed and we were unable to recover it. 00:36:16.393 [2024-07-26 16:41:36.009978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.393 [2024-07-26 16:41:36.010016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.393 qpair failed and we were unable to recover it. 00:36:16.393 [2024-07-26 16:41:36.010205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.393 [2024-07-26 16:41:36.010239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.393 qpair failed and we were unable to recover it. 
00:36:16.393 [2024-07-26 16:41:36.010430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.393 [2024-07-26 16:41:36.010466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.393 qpair failed and we were unable to recover it. 00:36:16.393 [2024-07-26 16:41:36.010743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.393 [2024-07-26 16:41:36.010778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.393 qpair failed and we were unable to recover it. 00:36:16.393 [2024-07-26 16:41:36.010934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.393 [2024-07-26 16:41:36.010968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.393 qpair failed and we were unable to recover it. 00:36:16.393 [2024-07-26 16:41:36.011148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.393 [2024-07-26 16:41:36.011184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.393 qpair failed and we were unable to recover it. 00:36:16.393 [2024-07-26 16:41:36.011396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.393 [2024-07-26 16:41:36.011438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.393 qpair failed and we were unable to recover it. 00:36:16.393 [2024-07-26 16:41:36.011655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.393 [2024-07-26 16:41:36.011706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.393 qpair failed and we were unable to recover it. 00:36:16.393 [2024-07-26 16:41:36.011917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.393 [2024-07-26 16:41:36.011951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.393 qpair failed and we were unable to recover it. 00:36:16.393 [2024-07-26 16:41:36.012130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.393 [2024-07-26 16:41:36.012178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.393 qpair failed and we were unable to recover it. 00:36:16.393 [2024-07-26 16:41:36.012359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.393 [2024-07-26 16:41:36.012398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.393 qpair failed and we were unable to recover it. 00:36:16.393 [2024-07-26 16:41:36.012595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.393 [2024-07-26 16:41:36.012630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.393 qpair failed and we were unable to recover it. 
00:36:16.393 [2024-07-26 16:41:36.012807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.393 [2024-07-26 16:41:36.012841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.393 qpair failed and we were unable to recover it. 00:36:16.393 [2024-07-26 16:41:36.013045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.393 [2024-07-26 16:41:36.013092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.393 qpair failed and we were unable to recover it. 00:36:16.393 [2024-07-26 16:41:36.013299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.393 [2024-07-26 16:41:36.013340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.393 qpair failed and we were unable to recover it. 00:36:16.393 [2024-07-26 16:41:36.013521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.393 [2024-07-26 16:41:36.013556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.393 qpair failed and we were unable to recover it. 00:36:16.393 [2024-07-26 16:41:36.013763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.393 [2024-07-26 16:41:36.013802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.393 qpair failed and we were unable to recover it. 00:36:16.393 [2024-07-26 16:41:36.013991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.393 [2024-07-26 16:41:36.014026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.393 qpair failed and we were unable to recover it. 00:36:16.393 [2024-07-26 16:41:36.014221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.393 [2024-07-26 16:41:36.014256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.393 qpair failed and we were unable to recover it. 00:36:16.393 [2024-07-26 16:41:36.014469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.393 [2024-07-26 16:41:36.014507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.393 qpair failed and we were unable to recover it. 00:36:16.393 [2024-07-26 16:41:36.014698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.393 [2024-07-26 16:41:36.014734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.393 qpair failed and we were unable to recover it. 00:36:16.394 [2024-07-26 16:41:36.014912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.394 [2024-07-26 16:41:36.014954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.394 qpair failed and we were unable to recover it. 
00:36:16.394 [2024-07-26 16:41:36.015153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.394 [2024-07-26 16:41:36.015194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.394 qpair failed and we were unable to recover it. 00:36:16.394 [2024-07-26 16:41:36.015411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.394 [2024-07-26 16:41:36.015447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.394 qpair failed and we were unable to recover it. 00:36:16.394 [2024-07-26 16:41:36.015654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.394 [2024-07-26 16:41:36.015695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.394 qpair failed and we were unable to recover it. 00:36:16.394 [2024-07-26 16:41:36.015881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.394 [2024-07-26 16:41:36.015922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.394 qpair failed and we were unable to recover it. 00:36:16.394 [2024-07-26 16:41:36.016121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.394 [2024-07-26 16:41:36.016155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.394 qpair failed and we were unable to recover it. 00:36:16.394 [2024-07-26 16:41:36.016363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.394 [2024-07-26 16:41:36.016402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.394 qpair failed and we were unable to recover it. 00:36:16.394 [2024-07-26 16:41:36.016589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.394 [2024-07-26 16:41:36.016624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.394 qpair failed and we were unable to recover it. 00:36:16.394 [2024-07-26 16:41:36.016778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.394 [2024-07-26 16:41:36.016812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.394 qpair failed and we were unable to recover it. 00:36:16.394 [2024-07-26 16:41:36.016974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.394 [2024-07-26 16:41:36.017009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.394 qpair failed and we were unable to recover it. 00:36:16.394 [2024-07-26 16:41:36.017167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.394 [2024-07-26 16:41:36.017206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.394 qpair failed and we were unable to recover it. 
00:36:16.394 [2024-07-26 16:41:36.017358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.394 [2024-07-26 16:41:36.017393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.394 qpair failed and we were unable to recover it. 00:36:16.394 [2024-07-26 16:41:36.017582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.394 [2024-07-26 16:41:36.017633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.394 qpair failed and we were unable to recover it. 00:36:16.394 [2024-07-26 16:41:36.017856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.394 [2024-07-26 16:41:36.017891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.394 qpair failed and we were unable to recover it. 00:36:16.394 [2024-07-26 16:41:36.018055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.394 [2024-07-26 16:41:36.018095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.394 qpair failed and we were unable to recover it. 00:36:16.394 [2024-07-26 16:41:36.018264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.394 [2024-07-26 16:41:36.018313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.394 qpair failed and we were unable to recover it. 00:36:16.394 [2024-07-26 16:41:36.018505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.394 [2024-07-26 16:41:36.018548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.394 qpair failed and we were unable to recover it. 00:36:16.394 [2024-07-26 16:41:36.018854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.394 [2024-07-26 16:41:36.018891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.394 qpair failed and we were unable to recover it. 00:36:16.394 [2024-07-26 16:41:36.019133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.394 [2024-07-26 16:41:36.019172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.394 qpair failed and we were unable to recover it. 00:36:16.394 [2024-07-26 16:41:36.019356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.394 [2024-07-26 16:41:36.019391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.394 qpair failed and we were unable to recover it. 00:36:16.394 [2024-07-26 16:41:36.019606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.394 [2024-07-26 16:41:36.019640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.394 qpair failed and we were unable to recover it. 
00:36:16.394 [2024-07-26 16:41:36.019831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.394 [2024-07-26 16:41:36.019874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.394 qpair failed and we were unable to recover it. 00:36:16.394 [2024-07-26 16:41:36.020084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.394 [2024-07-26 16:41:36.020132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.394 qpair failed and we were unable to recover it. 00:36:16.394 [2024-07-26 16:41:36.020354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.394 [2024-07-26 16:41:36.020389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.394 qpair failed and we were unable to recover it. 00:36:16.394 [2024-07-26 16:41:36.020561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.394 [2024-07-26 16:41:36.020596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.394 qpair failed and we were unable to recover it. 00:36:16.394 [2024-07-26 16:41:36.020745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.394 [2024-07-26 16:41:36.020780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.394 qpair failed and we were unable to recover it. 00:36:16.394 [2024-07-26 16:41:36.020940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.394 [2024-07-26 16:41:36.020974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.394 qpair failed and we were unable to recover it. 00:36:16.394 [2024-07-26 16:41:36.021171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.394 [2024-07-26 16:41:36.021208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.394 qpair failed and we were unable to recover it. 00:36:16.394 [2024-07-26 16:41:36.021396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.394 [2024-07-26 16:41:36.021436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.394 qpair failed and we were unable to recover it. 00:36:16.394 [2024-07-26 16:41:36.021638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.394 [2024-07-26 16:41:36.021673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.394 qpair failed and we were unable to recover it. 00:36:16.394 [2024-07-26 16:41:36.021898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.394 [2024-07-26 16:41:36.021934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.394 qpair failed and we were unable to recover it. 
00:36:16.394 [2024-07-26 16:41:36.022205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.394 [2024-07-26 16:41:36.022241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.394 qpair failed and we were unable to recover it. 00:36:16.394 [2024-07-26 16:41:36.022450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.394 [2024-07-26 16:41:36.022485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.394 qpair failed and we were unable to recover it. 00:36:16.394 [2024-07-26 16:41:36.022772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.394 [2024-07-26 16:41:36.022827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.394 qpair failed and we were unable to recover it. 00:36:16.394 [2024-07-26 16:41:36.023073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.394 [2024-07-26 16:41:36.023114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.394 qpair failed and we were unable to recover it. 00:36:16.394 [2024-07-26 16:41:36.023258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.394 [2024-07-26 16:41:36.023296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.394 qpair failed and we were unable to recover it. 00:36:16.394 [2024-07-26 16:41:36.023508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.394 [2024-07-26 16:41:36.023568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.394 qpair failed and we were unable to recover it. 00:36:16.394 [2024-07-26 16:41:36.023778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.395 [2024-07-26 16:41:36.023820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.395 qpair failed and we were unable to recover it. 00:36:16.395 [2024-07-26 16:41:36.024034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.395 [2024-07-26 16:41:36.024079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.395 qpair failed and we were unable to recover it. 00:36:16.395 [2024-07-26 16:41:36.024307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.395 [2024-07-26 16:41:36.024355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.395 qpair failed and we were unable to recover it. 00:36:16.395 [2024-07-26 16:41:36.024548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.395 [2024-07-26 16:41:36.024590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.395 qpair failed and we were unable to recover it. 
00:36:16.395 [2024-07-26 16:41:36.024801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.395 [2024-07-26 16:41:36.024836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.395 qpair failed and we were unable to recover it. 00:36:16.395 [2024-07-26 16:41:36.025033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.395 [2024-07-26 16:41:36.025078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.395 qpair failed and we were unable to recover it. 00:36:16.395 [2024-07-26 16:41:36.025282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.395 [2024-07-26 16:41:36.025319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.395 qpair failed and we were unable to recover it. 00:36:16.395 [2024-07-26 16:41:36.025499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.395 [2024-07-26 16:41:36.025537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.395 qpair failed and we were unable to recover it. 00:36:16.395 [2024-07-26 16:41:36.025807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.395 [2024-07-26 16:41:36.025867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.395 qpair failed and we were unable to recover it. 00:36:16.395 [2024-07-26 16:41:36.026083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.395 [2024-07-26 16:41:36.026128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.395 qpair failed and we were unable to recover it. 00:36:16.395 [2024-07-26 16:41:36.026306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.395 [2024-07-26 16:41:36.026349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.395 qpair failed and we were unable to recover it. 00:36:16.395 [2024-07-26 16:41:36.026527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.395 [2024-07-26 16:41:36.026565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.395 qpair failed and we were unable to recover it. 00:36:16.395 [2024-07-26 16:41:36.026777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.395 [2024-07-26 16:41:36.026812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.395 qpair failed and we were unable to recover it. 00:36:16.395 [2024-07-26 16:41:36.026976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.395 [2024-07-26 16:41:36.027011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.395 qpair failed and we were unable to recover it. 
00:36:16.395 [2024-07-26 16:41:36.027237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.395 [2024-07-26 16:41:36.027272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.395 qpair failed and we were unable to recover it. 00:36:16.395 [2024-07-26 16:41:36.027440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.395 [2024-07-26 16:41:36.027476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.395 qpair failed and we were unable to recover it. 00:36:16.395 [2024-07-26 16:41:36.027681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.395 [2024-07-26 16:41:36.027716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.395 qpair failed and we were unable to recover it. 00:36:16.395 [2024-07-26 16:41:36.027873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.395 [2024-07-26 16:41:36.027910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.395 qpair failed and we were unable to recover it. 00:36:16.395 [2024-07-26 16:41:36.028112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.395 [2024-07-26 16:41:36.028147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.395 qpair failed and we were unable to recover it. 00:36:16.395 [2024-07-26 16:41:36.028334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.395 [2024-07-26 16:41:36.028370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.395 qpair failed and we were unable to recover it. 00:36:16.395 [2024-07-26 16:41:36.028566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.395 [2024-07-26 16:41:36.028610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.395 qpair failed and we were unable to recover it. 00:36:16.395 [2024-07-26 16:41:36.028821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.395 [2024-07-26 16:41:36.028863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.395 qpair failed and we were unable to recover it. 00:36:16.395 [2024-07-26 16:41:36.029102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.395 [2024-07-26 16:41:36.029137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.395 qpair failed and we were unable to recover it. 00:36:16.395 [2024-07-26 16:41:36.029334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.395 [2024-07-26 16:41:36.029374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.395 qpair failed and we were unable to recover it. 
00:36:16.395 [2024-07-26 16:41:36.029581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.395 [2024-07-26 16:41:36.029619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.395 qpair failed and we were unable to recover it. 00:36:16.395 [2024-07-26 16:41:36.029852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.395 [2024-07-26 16:41:36.029890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.395 qpair failed and we were unable to recover it. 00:36:16.395 [2024-07-26 16:41:36.030103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.395 [2024-07-26 16:41:36.030144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.395 qpair failed and we were unable to recover it. 00:36:16.395 [2024-07-26 16:41:36.030370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.395 [2024-07-26 16:41:36.030409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.395 qpair failed and we were unable to recover it. 00:36:16.395 [2024-07-26 16:41:36.030637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.395 [2024-07-26 16:41:36.030672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.395 qpair failed and we were unable to recover it. 00:36:16.395 [2024-07-26 16:41:36.030871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.395 [2024-07-26 16:41:36.030911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.395 qpair failed and we were unable to recover it. 00:36:16.395 [2024-07-26 16:41:36.031118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.395 [2024-07-26 16:41:36.031157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.395 qpair failed and we were unable to recover it. 00:36:16.395 [2024-07-26 16:41:36.031364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.395 [2024-07-26 16:41:36.031404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.395 qpair failed and we were unable to recover it. 00:36:16.395 [2024-07-26 16:41:36.031692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.395 [2024-07-26 16:41:36.031747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.395 qpair failed and we were unable to recover it. 00:36:16.395 [2024-07-26 16:41:36.031977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.395 [2024-07-26 16:41:36.032016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.395 qpair failed and we were unable to recover it. 
00:36:16.395 [2024-07-26 16:41:36.032241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.395 [2024-07-26 16:41:36.032287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.395 qpair failed and we were unable to recover it. 00:36:16.395 [2024-07-26 16:41:36.032512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.395 [2024-07-26 16:41:36.032546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.395 qpair failed and we were unable to recover it. 00:36:16.395 [2024-07-26 16:41:36.032699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.395 [2024-07-26 16:41:36.032742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.395 qpair failed and we were unable to recover it. 00:36:16.395 [2024-07-26 16:41:36.032900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.395 [2024-07-26 16:41:36.032946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.396 qpair failed and we were unable to recover it. 00:36:16.396 [2024-07-26 16:41:36.033107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.396 [2024-07-26 16:41:36.033142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.396 qpair failed and we were unable to recover it. 00:36:16.396 [2024-07-26 16:41:36.033318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.396 [2024-07-26 16:41:36.033382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.396 qpair failed and we were unable to recover it. 00:36:16.396 [2024-07-26 16:41:36.033622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.396 [2024-07-26 16:41:36.033657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.396 qpair failed and we were unable to recover it. 00:36:16.396 [2024-07-26 16:41:36.033889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.396 [2024-07-26 16:41:36.033926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.396 qpair failed and we were unable to recover it. 00:36:16.396 [2024-07-26 16:41:36.034093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.396 [2024-07-26 16:41:36.034137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.396 qpair failed and we were unable to recover it. 00:36:16.396 [2024-07-26 16:41:36.034314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.396 [2024-07-26 16:41:36.034357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.396 qpair failed and we were unable to recover it. 
00:36:16.396 [2024-07-26 16:41:36.034587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.396 [2024-07-26 16:41:36.034626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.396 qpair failed and we were unable to recover it. 00:36:16.396 [2024-07-26 16:41:36.034867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.396 [2024-07-26 16:41:36.034906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.396 qpair failed and we were unable to recover it. 00:36:16.396 [2024-07-26 16:41:36.035137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.396 [2024-07-26 16:41:36.035172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.396 qpair failed and we were unable to recover it. 00:36:16.396 [2024-07-26 16:41:36.035367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.396 [2024-07-26 16:41:36.035405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.396 qpair failed and we were unable to recover it. 00:36:16.396 [2024-07-26 16:41:36.035644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.396 [2024-07-26 16:41:36.035683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.396 qpair failed and we were unable to recover it. 00:36:16.396 [2024-07-26 16:41:36.035941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.396 [2024-07-26 16:41:36.035975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.396 qpair failed and we were unable to recover it. 00:36:16.396 [2024-07-26 16:41:36.036182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.396 [2024-07-26 16:41:36.036217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.396 qpair failed and we were unable to recover it. 00:36:16.396 [2024-07-26 16:41:36.036359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.396 [2024-07-26 16:41:36.036393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.396 qpair failed and we were unable to recover it. 00:36:16.396 [2024-07-26 16:41:36.036603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.396 [2024-07-26 16:41:36.036637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.396 qpair failed and we were unable to recover it. 00:36:16.396 [2024-07-26 16:41:36.036883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.396 [2024-07-26 16:41:36.036922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.396 qpair failed and we were unable to recover it. 
00:36:16.396 [2024-07-26 16:41:36.037107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.396 [2024-07-26 16:41:36.037146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.396 qpair failed and we were unable to recover it. 00:36:16.396 [2024-07-26 16:41:36.037348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.396 [2024-07-26 16:41:36.037384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.396 qpair failed and we were unable to recover it. 00:36:16.396 [2024-07-26 16:41:36.037626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.396 [2024-07-26 16:41:36.037664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.396 qpair failed and we were unable to recover it. 00:36:16.396 [2024-07-26 16:41:36.037904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.396 [2024-07-26 16:41:36.037943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.396 qpair failed and we were unable to recover it. 00:36:16.396 [2024-07-26 16:41:36.038129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.396 [2024-07-26 16:41:36.038164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.396 qpair failed and we were unable to recover it. 00:36:16.396 [2024-07-26 16:41:36.038369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.396 [2024-07-26 16:41:36.038404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.396 qpair failed and we were unable to recover it. 00:36:16.396 [2024-07-26 16:41:36.038586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.396 [2024-07-26 16:41:36.038621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.396 qpair failed and we were unable to recover it. 00:36:16.396 [2024-07-26 16:41:36.038847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.396 [2024-07-26 16:41:36.038883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.396 qpair failed and we were unable to recover it. 00:36:16.396 [2024-07-26 16:41:36.039132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.396 [2024-07-26 16:41:36.039182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.396 qpair failed and we were unable to recover it. 00:36:16.396 [2024-07-26 16:41:36.039365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.396 [2024-07-26 16:41:36.039405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.396 qpair failed and we were unable to recover it. 
00:36:16.396 [2024-07-26 16:41:36.039611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.396 [2024-07-26 16:41:36.039646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.396 qpair failed and we were unable to recover it. 00:36:16.396 [2024-07-26 16:41:36.039962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.396 [2024-07-26 16:41:36.040022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.396 qpair failed and we were unable to recover it. 00:36:16.396 [2024-07-26 16:41:36.040207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.396 [2024-07-26 16:41:36.040243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.396 qpair failed and we were unable to recover it. 00:36:16.396 [2024-07-26 16:41:36.040504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.396 [2024-07-26 16:41:36.040539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.396 qpair failed and we were unable to recover it. 00:36:16.396 [2024-07-26 16:41:36.040809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.396 [2024-07-26 16:41:36.040844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.396 qpair failed and we were unable to recover it. 00:36:16.396 [2024-07-26 16:41:36.041044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.396 [2024-07-26 16:41:36.041087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.396 qpair failed and we were unable to recover it. 00:36:16.396 [2024-07-26 16:41:36.041269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.396 [2024-07-26 16:41:36.041302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.396 qpair failed and we were unable to recover it. 00:36:16.396 [2024-07-26 16:41:36.041678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.396 [2024-07-26 16:41:36.041739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.396 qpair failed and we were unable to recover it. 00:36:16.396 [2024-07-26 16:41:36.041999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.396 [2024-07-26 16:41:36.042036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.396 qpair failed and we were unable to recover it. 00:36:16.396 [2024-07-26 16:41:36.042213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.396 [2024-07-26 16:41:36.042248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.396 qpair failed and we were unable to recover it. 
00:36:16.396 [2024-07-26 16:41:36.042543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.396 [2024-07-26 16:41:36.042611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.396 qpair failed and we were unable to recover it. 00:36:16.396 [2024-07-26 16:41:36.042834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.397 [2024-07-26 16:41:36.042871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.397 qpair failed and we were unable to recover it. 00:36:16.397 [2024-07-26 16:41:36.043092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.397 [2024-07-26 16:41:36.043137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.397 qpair failed and we were unable to recover it. 00:36:16.397 [2024-07-26 16:41:36.043312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.397 [2024-07-26 16:41:36.043362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.397 qpair failed and we were unable to recover it. 00:36:16.397 [2024-07-26 16:41:36.043564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.397 [2024-07-26 16:41:36.043602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.397 qpair failed and we were unable to recover it. 00:36:16.397 [2024-07-26 16:41:36.043877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.397 [2024-07-26 16:41:36.043915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.397 qpair failed and we were unable to recover it. 00:36:16.397 [2024-07-26 16:41:36.044135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.397 [2024-07-26 16:41:36.044170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.397 qpair failed and we were unable to recover it. 00:36:16.397 [2024-07-26 16:41:36.044347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.397 [2024-07-26 16:41:36.044394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.397 qpair failed and we were unable to recover it. 00:36:16.397 [2024-07-26 16:41:36.044550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.397 [2024-07-26 16:41:36.044585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.397 qpair failed and we were unable to recover it. 00:36:16.397 [2024-07-26 16:41:36.044788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.397 [2024-07-26 16:41:36.044822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.397 qpair failed and we were unable to recover it. 
00:36:16.397 [2024-07-26 16:41:36.045031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.397 [2024-07-26 16:41:36.045074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.397 qpair failed and we were unable to recover it. 00:36:16.397 [2024-07-26 16:41:36.045295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.397 [2024-07-26 16:41:36.045338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.397 qpair failed and we were unable to recover it. 00:36:16.397 [2024-07-26 16:41:36.045543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.397 [2024-07-26 16:41:36.045578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.397 qpair failed and we were unable to recover it. 00:36:16.397 [2024-07-26 16:41:36.045756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.397 [2024-07-26 16:41:36.045791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.397 qpair failed and we were unable to recover it. 00:36:16.397 [2024-07-26 16:41:36.045965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.397 [2024-07-26 16:41:36.045999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.397 qpair failed and we were unable to recover it. 00:36:16.397 [2024-07-26 16:41:36.046255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.397 [2024-07-26 16:41:36.046289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.397 qpair failed and we were unable to recover it. 00:36:16.397 [2024-07-26 16:41:36.046525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.397 [2024-07-26 16:41:36.046563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.397 qpair failed and we were unable to recover it. 00:36:16.397 [2024-07-26 16:41:36.046736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.397 [2024-07-26 16:41:36.046771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.397 qpair failed and we were unable to recover it. 00:36:16.397 [2024-07-26 16:41:36.047054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.397 [2024-07-26 16:41:36.047103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.397 qpair failed and we were unable to recover it. 00:36:16.397 [2024-07-26 16:41:36.047262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.397 [2024-07-26 16:41:36.047299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.397 qpair failed and we were unable to recover it. 
00:36:16.397 [2024-07-26 16:41:36.047504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.397 [2024-07-26 16:41:36.047538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.397 qpair failed and we were unable to recover it. 00:36:16.397 [2024-07-26 16:41:36.047858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.397 [2024-07-26 16:41:36.047898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.397 qpair failed and we were unable to recover it. 00:36:16.397 [2024-07-26 16:41:36.048081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.397 [2024-07-26 16:41:36.048124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.397 qpair failed and we were unable to recover it. 00:36:16.397 [2024-07-26 16:41:36.048300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.397 [2024-07-26 16:41:36.048342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.397 qpair failed and we were unable to recover it. 00:36:16.397 [2024-07-26 16:41:36.048571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.397 [2024-07-26 16:41:36.048609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.397 qpair failed and we were unable to recover it. 00:36:16.397 [2024-07-26 16:41:36.048810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.397 [2024-07-26 16:41:36.048848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.397 qpair failed and we were unable to recover it. 00:36:16.397 [2024-07-26 16:41:36.049052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.397 [2024-07-26 16:41:36.049092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.397 qpair failed and we were unable to recover it. 00:36:16.397 [2024-07-26 16:41:36.049280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.397 [2024-07-26 16:41:36.049318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.397 qpair failed and we were unable to recover it. 00:36:16.397 [2024-07-26 16:41:36.049540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.397 [2024-07-26 16:41:36.049578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.397 qpair failed and we were unable to recover it. 00:36:16.397 [2024-07-26 16:41:36.049783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.397 [2024-07-26 16:41:36.049817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.397 qpair failed and we were unable to recover it. 
00:36:16.397 [2024-07-26 16:41:36.050000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.397 [2024-07-26 16:41:36.050034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.397 qpair failed and we were unable to recover it. 00:36:16.397 [2024-07-26 16:41:36.050224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.397 [2024-07-26 16:41:36.050258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.397 qpair failed and we were unable to recover it. 00:36:16.397 [2024-07-26 16:41:36.050444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.397 [2024-07-26 16:41:36.050478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.397 qpair failed and we were unable to recover it. 00:36:16.397 [2024-07-26 16:41:36.050680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.397 [2024-07-26 16:41:36.050718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.397 qpair failed and we were unable to recover it. 00:36:16.397 [2024-07-26 16:41:36.050923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.397 [2024-07-26 16:41:36.050961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.397 qpair failed and we were unable to recover it. 00:36:16.397 [2024-07-26 16:41:36.051130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.397 [2024-07-26 16:41:36.051165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.397 qpair failed and we were unable to recover it. 00:36:16.397 [2024-07-26 16:41:36.051337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.397 [2024-07-26 16:41:36.051389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.398 qpair failed and we were unable to recover it. 00:36:16.398 [2024-07-26 16:41:36.051559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.398 [2024-07-26 16:41:36.051602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.398 qpair failed and we were unable to recover it. 00:36:16.398 [2024-07-26 16:41:36.051782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.398 [2024-07-26 16:41:36.051818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.398 qpair failed and we were unable to recover it. 00:36:16.398 [2024-07-26 16:41:36.052002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.398 [2024-07-26 16:41:36.052038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.398 qpair failed and we were unable to recover it. 
00:36:16.398 [2024-07-26 16:41:36.052313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.398 [2024-07-26 16:41:36.052350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.398 qpair failed and we were unable to recover it. 00:36:16.398 [2024-07-26 16:41:36.052547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.398 [2024-07-26 16:41:36.052582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.398 qpair failed and we were unable to recover it. 00:36:16.398 [2024-07-26 16:41:36.052764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.398 [2024-07-26 16:41:36.052798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.398 qpair failed and we were unable to recover it. 00:36:16.398 [2024-07-26 16:41:36.052974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.398 [2024-07-26 16:41:36.053009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.398 qpair failed and we were unable to recover it. 00:36:16.398 [2024-07-26 16:41:36.053166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.398 [2024-07-26 16:41:36.053201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.398 qpair failed and we were unable to recover it. 00:36:16.398 [2024-07-26 16:41:36.053428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.398 [2024-07-26 16:41:36.053467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.398 qpair failed and we were unable to recover it. 00:36:16.398 [2024-07-26 16:41:36.053671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.398 [2024-07-26 16:41:36.053711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.398 qpair failed and we were unable to recover it. 00:36:16.398 [2024-07-26 16:41:36.053914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.398 [2024-07-26 16:41:36.053949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.398 qpair failed and we were unable to recover it. 00:36:16.398 [2024-07-26 16:41:36.054143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.398 [2024-07-26 16:41:36.054179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.398 qpair failed and we were unable to recover it. 00:36:16.398 [2024-07-26 16:41:36.054370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.398 [2024-07-26 16:41:36.054408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.398 qpair failed and we were unable to recover it. 
00:36:16.398 [2024-07-26 16:41:36.054602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.398 [2024-07-26 16:41:36.054638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.398 qpair failed and we were unable to recover it. 00:36:16.398 [2024-07-26 16:41:36.054844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.398 [2024-07-26 16:41:36.054883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.398 qpair failed and we were unable to recover it. 00:36:16.398 [2024-07-26 16:41:36.055089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.398 [2024-07-26 16:41:36.055124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.398 qpair failed and we were unable to recover it. 00:36:16.398 [2024-07-26 16:41:36.055308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.398 [2024-07-26 16:41:36.055342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.398 qpair failed and we were unable to recover it. 00:36:16.398 [2024-07-26 16:41:36.055518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.398 [2024-07-26 16:41:36.055554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.398 qpair failed and we were unable to recover it. 00:36:16.398 [2024-07-26 16:41:36.055775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.398 [2024-07-26 16:41:36.055811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.398 qpair failed and we were unable to recover it. 00:36:16.398 [2024-07-26 16:41:36.055996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.398 [2024-07-26 16:41:36.056031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.398 qpair failed and we were unable to recover it. 00:36:16.398 [2024-07-26 16:41:36.056270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.398 [2024-07-26 16:41:36.056324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.398 qpair failed and we were unable to recover it. 00:36:16.398 [2024-07-26 16:41:36.056568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.398 [2024-07-26 16:41:36.056613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.398 qpair failed and we were unable to recover it. 00:36:16.398 [2024-07-26 16:41:36.056776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.398 [2024-07-26 16:41:36.056813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.398 qpair failed and we were unable to recover it. 
00:36:16.398 [2024-07-26 16:41:36.057016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.398 [2024-07-26 16:41:36.057051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.398 qpair failed and we were unable to recover it. 00:36:16.398 [2024-07-26 16:41:36.057257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.398 [2024-07-26 16:41:36.057296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.398 qpair failed and we were unable to recover it. 00:36:16.398 [2024-07-26 16:41:36.057515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.398 [2024-07-26 16:41:36.057549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.398 qpair failed and we were unable to recover it. 00:36:16.398 [2024-07-26 16:41:36.057742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.398 [2024-07-26 16:41:36.057792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.398 qpair failed and we were unable to recover it. 00:36:16.398 [2024-07-26 16:41:36.058030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.398 [2024-07-26 16:41:36.058073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.398 qpair failed and we were unable to recover it. 00:36:16.398 [2024-07-26 16:41:36.058229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.398 [2024-07-26 16:41:36.058264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.398 qpair failed and we were unable to recover it. 00:36:16.398 [2024-07-26 16:41:36.058464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.398 [2024-07-26 16:41:36.058515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.398 qpair failed and we were unable to recover it. 00:36:16.398 [2024-07-26 16:41:36.058709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.398 [2024-07-26 16:41:36.058747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.398 qpair failed and we were unable to recover it. 00:36:16.398 [2024-07-26 16:41:36.058933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.398 [2024-07-26 16:41:36.058969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.398 qpair failed and we were unable to recover it. 00:36:16.398 [2024-07-26 16:41:36.059176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.398 [2024-07-26 16:41:36.059211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.398 qpair failed and we were unable to recover it. 
00:36:16.398 [2024-07-26 16:41:36.059471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.399 [2024-07-26 16:41:36.059506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.399 qpair failed and we were unable to recover it. 00:36:16.399 [2024-07-26 16:41:36.059703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.399 [2024-07-26 16:41:36.059738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.399 qpair failed and we were unable to recover it. 00:36:16.399 [2024-07-26 16:41:36.059948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.399 [2024-07-26 16:41:36.059983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.399 qpair failed and we were unable to recover it. 00:36:16.399 [2024-07-26 16:41:36.060173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.399 [2024-07-26 16:41:36.060220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.399 qpair failed and we were unable to recover it. 00:36:16.399 [2024-07-26 16:41:36.060372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.399 [2024-07-26 16:41:36.060413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.399 qpair failed and we were unable to recover it. 00:36:16.399 [2024-07-26 16:41:36.060586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.399 [2024-07-26 16:41:36.060637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.399 qpair failed and we were unable to recover it. 00:36:16.399 [2024-07-26 16:41:36.060893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.399 [2024-07-26 16:41:36.060929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.399 qpair failed and we were unable to recover it. 00:36:16.399 [2024-07-26 16:41:36.061120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.399 [2024-07-26 16:41:36.061165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.399 qpair failed and we were unable to recover it. 00:36:16.399 [2024-07-26 16:41:36.061409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.399 [2024-07-26 16:41:36.061449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.399 qpair failed and we were unable to recover it. 00:36:16.399 [2024-07-26 16:41:36.061685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.399 [2024-07-26 16:41:36.061724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.399 qpair failed and we were unable to recover it. 
00:36:16.399 [2024-07-26 16:41:36.061911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.399 [2024-07-26 16:41:36.061958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.399 qpair failed and we were unable to recover it. 00:36:16.399 [2024-07-26 16:41:36.062204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.399 [2024-07-26 16:41:36.062242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.399 qpair failed and we were unable to recover it. 00:36:16.399 [2024-07-26 16:41:36.062401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.399 [2024-07-26 16:41:36.062450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.399 qpair failed and we were unable to recover it. 00:36:16.399 [2024-07-26 16:41:36.062672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.399 [2024-07-26 16:41:36.062706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.399 qpair failed and we were unable to recover it. 00:36:16.399 [2024-07-26 16:41:36.062949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.399 [2024-07-26 16:41:36.062983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.399 qpair failed and we were unable to recover it. 00:36:16.399 [2024-07-26 16:41:36.063213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.399 [2024-07-26 16:41:36.063252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.399 qpair failed and we were unable to recover it. 00:36:16.399 [2024-07-26 16:41:36.063492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.399 [2024-07-26 16:41:36.063526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.399 qpair failed and we were unable to recover it. 00:36:16.399 [2024-07-26 16:41:36.063735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.399 [2024-07-26 16:41:36.063770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.399 qpair failed and we were unable to recover it. 00:36:16.399 [2024-07-26 16:41:36.063941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.399 [2024-07-26 16:41:36.063975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.399 qpair failed and we were unable to recover it. 00:36:16.399 [2024-07-26 16:41:36.064180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.399 [2024-07-26 16:41:36.064215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.399 qpair failed and we were unable to recover it. 
00:36:16.399 [2024-07-26 16:41:36.064394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.399 [2024-07-26 16:41:36.064438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.399 qpair failed and we were unable to recover it. 00:36:16.399 [2024-07-26 16:41:36.064663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.399 [2024-07-26 16:41:36.064698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.399 qpair failed and we were unable to recover it. 00:36:16.399 [2024-07-26 16:41:36.064866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.399 [2024-07-26 16:41:36.064900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.399 qpair failed and we were unable to recover it. 00:36:16.399 [2024-07-26 16:41:36.065150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.399 [2024-07-26 16:41:36.065188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.399 qpair failed and we were unable to recover it. 00:36:16.399 [2024-07-26 16:41:36.065348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.399 [2024-07-26 16:41:36.065387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.399 qpair failed and we were unable to recover it. 00:36:16.399 [2024-07-26 16:41:36.065582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.399 [2024-07-26 16:41:36.065617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.399 qpair failed and we were unable to recover it. 00:36:16.399 [2024-07-26 16:41:36.065836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.399 [2024-07-26 16:41:36.065873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.399 qpair failed and we were unable to recover it. 00:36:16.399 [2024-07-26 16:41:36.066070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.399 [2024-07-26 16:41:36.066110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.399 qpair failed and we were unable to recover it. 00:36:16.399 [2024-07-26 16:41:36.066312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.399 [2024-07-26 16:41:36.066347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.399 qpair failed and we were unable to recover it. 00:36:16.399 [2024-07-26 16:41:36.066525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.399 [2024-07-26 16:41:36.066560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.399 qpair failed and we were unable to recover it. 
00:36:16.399 [2024-07-26 16:41:36.066729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.399 [2024-07-26 16:41:36.066766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.399 qpair failed and we were unable to recover it. 00:36:16.399 [2024-07-26 16:41:36.066941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.399 [2024-07-26 16:41:36.066975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.399 qpair failed and we were unable to recover it. 00:36:16.399 [2024-07-26 16:41:36.067162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.399 [2024-07-26 16:41:36.067214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.399 qpair failed and we were unable to recover it. 00:36:16.399 [2024-07-26 16:41:36.067458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.399 [2024-07-26 16:41:36.067498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.399 qpair failed and we were unable to recover it. 00:36:16.399 [2024-07-26 16:41:36.067695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.399 [2024-07-26 16:41:36.067736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.399 qpair failed and we were unable to recover it. 00:36:16.399 [2024-07-26 16:41:36.067900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.399 [2024-07-26 16:41:36.067948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.399 qpair failed and we were unable to recover it. 00:36:16.399 [2024-07-26 16:41:36.068130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.399 [2024-07-26 16:41:36.068166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.399 qpair failed and we were unable to recover it. 00:36:16.400 [2024-07-26 16:41:36.068320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.400 [2024-07-26 16:41:36.068357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.400 qpair failed and we were unable to recover it. 00:36:16.400 [2024-07-26 16:41:36.068528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.400 [2024-07-26 16:41:36.068567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.400 qpair failed and we were unable to recover it. 00:36:16.400 [2024-07-26 16:41:36.068787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.400 [2024-07-26 16:41:36.068825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.400 qpair failed and we were unable to recover it. 
00:36:16.400 [2024-07-26 16:41:36.069030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.400 [2024-07-26 16:41:36.069077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.400 qpair failed and we were unable to recover it. 00:36:16.400 [2024-07-26 16:41:36.069239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.400 [2024-07-26 16:41:36.069275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.400 qpair failed and we were unable to recover it. 00:36:16.400 [2024-07-26 16:41:36.069501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.400 [2024-07-26 16:41:36.069540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.400 qpair failed and we were unable to recover it. 00:36:16.400 [2024-07-26 16:41:36.069766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.400 [2024-07-26 16:41:36.069801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.400 qpair failed and we were unable to recover it. 00:36:16.400 [2024-07-26 16:41:36.070003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.400 [2024-07-26 16:41:36.070038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.400 qpair failed and we were unable to recover it. 00:36:16.400 [2024-07-26 16:41:36.070248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.400 [2024-07-26 16:41:36.070283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.400 qpair failed and we were unable to recover it. 00:36:16.400 [2024-07-26 16:41:36.070473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.400 [2024-07-26 16:41:36.070513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.400 qpair failed and we were unable to recover it. 00:36:16.400 [2024-07-26 16:41:36.070836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.400 [2024-07-26 16:41:36.070893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.400 qpair failed and we were unable to recover it. 00:36:16.400 [2024-07-26 16:41:36.071126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.400 [2024-07-26 16:41:36.071165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.400 qpair failed and we were unable to recover it. 00:36:16.400 [2024-07-26 16:41:36.071377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.400 [2024-07-26 16:41:36.071412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.400 qpair failed and we were unable to recover it. 
00:36:16.400 [2024-07-26 16:41:36.071696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.400 [2024-07-26 16:41:36.071756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.400 qpair failed and we were unable to recover it. 00:36:16.400 [2024-07-26 16:41:36.071978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.400 [2024-07-26 16:41:36.072017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.400 qpair failed and we were unable to recover it. 00:36:16.400 [2024-07-26 16:41:36.072199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.400 [2024-07-26 16:41:36.072234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.400 qpair failed and we were unable to recover it. 00:36:16.400 [2024-07-26 16:41:36.072390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.400 [2024-07-26 16:41:36.072442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.400 qpair failed and we were unable to recover it. 00:36:16.400 [2024-07-26 16:41:36.072669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.400 [2024-07-26 16:41:36.072707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.400 qpair failed and we were unable to recover it. 00:36:16.400 [2024-07-26 16:41:36.072895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.400 [2024-07-26 16:41:36.072930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.400 qpair failed and we were unable to recover it. 00:36:16.400 [2024-07-26 16:41:36.073127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.400 [2024-07-26 16:41:36.073177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.400 qpair failed and we were unable to recover it. 00:36:16.400 [2024-07-26 16:41:36.073435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.400 [2024-07-26 16:41:36.073473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.400 qpair failed and we were unable to recover it. 00:36:16.400 [2024-07-26 16:41:36.073674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.400 [2024-07-26 16:41:36.073721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.400 qpair failed and we were unable to recover it. 00:36:16.400 [2024-07-26 16:41:36.073877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.400 [2024-07-26 16:41:36.073911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.400 qpair failed and we were unable to recover it. 
00:36:16.400 [2024-07-26 16:41:36.074097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.400 [2024-07-26 16:41:36.074136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.400 qpair failed and we were unable to recover it. 00:36:16.400 [2024-07-26 16:41:36.074322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.400 [2024-07-26 16:41:36.074357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.400 qpair failed and we were unable to recover it. 00:36:16.400 [2024-07-26 16:41:36.074532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.400 [2024-07-26 16:41:36.074567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.400 qpair failed and we were unable to recover it. 00:36:16.400 [2024-07-26 16:41:36.074761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.400 [2024-07-26 16:41:36.074799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.400 qpair failed and we were unable to recover it. 00:36:16.400 [2024-07-26 16:41:36.074971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.400 [2024-07-26 16:41:36.075004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.400 qpair failed and we were unable to recover it. 00:36:16.400 [2024-07-26 16:41:36.075175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.400 [2024-07-26 16:41:36.075209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.400 qpair failed and we were unable to recover it. 00:36:16.400 [2024-07-26 16:41:36.075433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.400 [2024-07-26 16:41:36.075471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.400 qpair failed and we were unable to recover it. 00:36:16.400 [2024-07-26 16:41:36.075666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.400 [2024-07-26 16:41:36.075700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.400 qpair failed and we were unable to recover it. 00:36:16.400 [2024-07-26 16:41:36.075901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.400 [2024-07-26 16:41:36.075939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.400 qpair failed and we were unable to recover it. 00:36:16.400 [2024-07-26 16:41:36.076138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.400 [2024-07-26 16:41:36.076177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.400 qpair failed and we were unable to recover it. 
00:36:16.400 [2024-07-26 16:41:36.076380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.400 [2024-07-26 16:41:36.076414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.400 qpair failed and we were unable to recover it. 00:36:16.400 [2024-07-26 16:41:36.076763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.400 [2024-07-26 16:41:36.076819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.400 qpair failed and we were unable to recover it. 00:36:16.400 [2024-07-26 16:41:36.077018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.400 [2024-07-26 16:41:36.077053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.400 qpair failed and we were unable to recover it. 00:36:16.400 [2024-07-26 16:41:36.077251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.400 [2024-07-26 16:41:36.077286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.400 qpair failed and we were unable to recover it. 00:36:16.400 [2024-07-26 16:41:36.077463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.401 [2024-07-26 16:41:36.077502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.401 qpair failed and we were unable to recover it. 00:36:16.401 [2024-07-26 16:41:36.077724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.401 [2024-07-26 16:41:36.077759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.401 qpair failed and we were unable to recover it. 00:36:16.401 [2024-07-26 16:41:36.077939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.401 [2024-07-26 16:41:36.077972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.401 qpair failed and we were unable to recover it. 00:36:16.401 [2024-07-26 16:41:36.078154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.401 [2024-07-26 16:41:36.078190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.401 qpair failed and we were unable to recover it. 00:36:16.401 [2024-07-26 16:41:36.078371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.401 [2024-07-26 16:41:36.078406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.401 qpair failed and we were unable to recover it. 00:36:16.401 [2024-07-26 16:41:36.078648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.401 [2024-07-26 16:41:36.078682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.401 qpair failed and we were unable to recover it. 
00:36:16.401 [2024-07-26 16:41:36.078887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.401 [2024-07-26 16:41:36.078926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.401 qpair failed and we were unable to recover it. 00:36:16.401 [2024-07-26 16:41:36.079154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.401 [2024-07-26 16:41:36.079193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.401 qpair failed and we were unable to recover it. 00:36:16.401 [2024-07-26 16:41:36.079388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.401 [2024-07-26 16:41:36.079423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.401 qpair failed and we were unable to recover it. 00:36:16.401 [2024-07-26 16:41:36.079601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.401 [2024-07-26 16:41:36.079636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.401 qpair failed and we were unable to recover it. 00:36:16.401 [2024-07-26 16:41:36.079820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.401 [2024-07-26 16:41:36.079854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.401 qpair failed and we were unable to recover it. 00:36:16.401 [2024-07-26 16:41:36.080049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.401 [2024-07-26 16:41:36.080093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.401 qpair failed and we were unable to recover it. 00:36:16.401 [2024-07-26 16:41:36.080304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.401 [2024-07-26 16:41:36.080343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.401 qpair failed and we were unable to recover it. 00:36:16.401 [2024-07-26 16:41:36.080539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.401 [2024-07-26 16:41:36.080576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.401 qpair failed and we were unable to recover it. 00:36:16.401 [2024-07-26 16:41:36.080791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.401 [2024-07-26 16:41:36.080826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.401 qpair failed and we were unable to recover it. 00:36:16.401 [2024-07-26 16:41:36.081046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.401 [2024-07-26 16:41:36.081098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.401 qpair failed and we were unable to recover it. 
00:36:16.401 [2024-07-26 16:41:36.081316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.401 [2024-07-26 16:41:36.081351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.401 qpair failed and we were unable to recover it. 00:36:16.401 [2024-07-26 16:41:36.081518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.401 [2024-07-26 16:41:36.081554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.401 qpair failed and we were unable to recover it. 00:36:16.401 [2024-07-26 16:41:36.081839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.401 [2024-07-26 16:41:36.081896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.401 qpair failed and we were unable to recover it. 00:36:16.401 [2024-07-26 16:41:36.082111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.401 [2024-07-26 16:41:36.082147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.401 qpair failed and we were unable to recover it. 00:36:16.401 [2024-07-26 16:41:36.082311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.401 [2024-07-26 16:41:36.082345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.401 qpair failed and we were unable to recover it. 00:36:16.401 [2024-07-26 16:41:36.082527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.401 [2024-07-26 16:41:36.082575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.401 qpair failed and we were unable to recover it. 00:36:16.401 [2024-07-26 16:41:36.082765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.401 [2024-07-26 16:41:36.082799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.401 qpair failed and we were unable to recover it. 00:36:16.401 [2024-07-26 16:41:36.082959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.401 [2024-07-26 16:41:36.082992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.401 qpair failed and we were unable to recover it. 00:36:16.401 [2024-07-26 16:41:36.083251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.401 [2024-07-26 16:41:36.083286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.401 qpair failed and we were unable to recover it. 00:36:16.401 [2024-07-26 16:41:36.083495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.401 [2024-07-26 16:41:36.083529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.401 qpair failed and we were unable to recover it. 
00:36:16.401 [2024-07-26 16:41:36.083747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.401 [2024-07-26 16:41:36.083781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.401 qpair failed and we were unable to recover it. 00:36:16.401 [2024-07-26 16:41:36.083968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.401 [2024-07-26 16:41:36.084006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.401 qpair failed and we were unable to recover it. 00:36:16.401 [2024-07-26 16:41:36.084211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.401 [2024-07-26 16:41:36.084246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.401 qpair failed and we were unable to recover it. 00:36:16.401 [2024-07-26 16:41:36.084503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.401 [2024-07-26 16:41:36.084536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.401 qpair failed and we were unable to recover it. 00:36:16.401 [2024-07-26 16:41:36.084757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.401 [2024-07-26 16:41:36.084790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.401 qpair failed and we were unable to recover it. 00:36:16.401 [2024-07-26 16:41:36.084981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.401 [2024-07-26 16:41:36.085016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.401 qpair failed and we were unable to recover it. 00:36:16.401 [2024-07-26 16:41:36.085217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.401 [2024-07-26 16:41:36.085252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.401 qpair failed and we were unable to recover it. 00:36:16.401 [2024-07-26 16:41:36.085461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.401 [2024-07-26 16:41:36.085499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.401 qpair failed and we were unable to recover it. 00:36:16.401 [2024-07-26 16:41:36.085699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.401 [2024-07-26 16:41:36.085749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.401 qpair failed and we were unable to recover it. 00:36:16.401 [2024-07-26 16:41:36.085928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.401 [2024-07-26 16:41:36.085962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.401 qpair failed and we were unable to recover it. 
00:36:16.401 [2024-07-26 16:41:36.086201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.401 [2024-07-26 16:41:36.086240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.401 qpair failed and we were unable to recover it. 00:36:16.401 [2024-07-26 16:41:36.086429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.401 [2024-07-26 16:41:36.086467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.401 qpair failed and we were unable to recover it. 00:36:16.402 [2024-07-26 16:41:36.086686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.402 [2024-07-26 16:41:36.086735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.402 qpair failed and we were unable to recover it. 00:36:16.402 [2024-07-26 16:41:36.086936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.402 [2024-07-26 16:41:36.086970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.402 qpair failed and we were unable to recover it. 00:36:16.402 [2024-07-26 16:41:36.087145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.402 [2024-07-26 16:41:36.087185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.402 qpair failed and we were unable to recover it. 00:36:16.402 [2024-07-26 16:41:36.087371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.402 [2024-07-26 16:41:36.087406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.402 qpair failed and we were unable to recover it. 00:36:16.402 [2024-07-26 16:41:36.087582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.402 [2024-07-26 16:41:36.087620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.402 qpair failed and we were unable to recover it. 00:36:16.402 [2024-07-26 16:41:36.087820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.402 [2024-07-26 16:41:36.087858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.402 qpair failed and we were unable to recover it. 00:36:16.402 [2024-07-26 16:41:36.088047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.402 [2024-07-26 16:41:36.088103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.402 qpair failed and we were unable to recover it. 00:36:16.402 [2024-07-26 16:41:36.088373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.402 [2024-07-26 16:41:36.088411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.402 qpair failed and we were unable to recover it. 
00:36:16.402 [2024-07-26 16:41:36.088621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.402 [2024-07-26 16:41:36.088671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.402 qpair failed and we were unable to recover it. 00:36:16.402 [2024-07-26 16:41:36.088894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.402 [2024-07-26 16:41:36.088929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.402 qpair failed and we were unable to recover it. 00:36:16.402 [2024-07-26 16:41:36.089289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.402 [2024-07-26 16:41:36.089348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.402 qpair failed and we were unable to recover it. 00:36:16.402 [2024-07-26 16:41:36.089565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.402 [2024-07-26 16:41:36.089596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.402 qpair failed and we were unable to recover it. 00:36:16.402 [2024-07-26 16:41:36.089866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.402 [2024-07-26 16:41:36.089901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.402 qpair failed and we were unable to recover it. 00:36:16.402 [2024-07-26 16:41:36.090153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.402 [2024-07-26 16:41:36.090192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.402 qpair failed and we were unable to recover it. 00:36:16.402 [2024-07-26 16:41:36.090469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.402 [2024-07-26 16:41:36.090507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.402 qpair failed and we were unable to recover it. 00:36:16.402 [2024-07-26 16:41:36.090753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.402 [2024-07-26 16:41:36.090786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.402 qpair failed and we were unable to recover it. 00:36:16.402 [2024-07-26 16:41:36.091066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.402 [2024-07-26 16:41:36.091104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.402 qpair failed and we were unable to recover it. 00:36:16.402 [2024-07-26 16:41:36.091299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.402 [2024-07-26 16:41:36.091335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.402 qpair failed and we were unable to recover it. 
00:36:16.682 [2024-07-26 16:41:36.134856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.682 [2024-07-26 16:41:36.134893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.682 qpair failed and we were unable to recover it. 00:36:16.682 [2024-07-26 16:41:36.135152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.682 [2024-07-26 16:41:36.135187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.682 qpair failed and we were unable to recover it. 00:36:16.682 [2024-07-26 16:41:36.135335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.682 [2024-07-26 16:41:36.135370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.682 qpair failed and we were unable to recover it. 00:36:16.682 [2024-07-26 16:41:36.135743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.682 [2024-07-26 16:41:36.135781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.682 qpair failed and we were unable to recover it. 00:36:16.682 [2024-07-26 16:41:36.135975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.682 [2024-07-26 16:41:36.136027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.682 qpair failed and we were unable to recover it. 00:36:16.682 [2024-07-26 16:41:36.136238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.682 [2024-07-26 16:41:36.136272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.683 qpair failed and we were unable to recover it. 00:36:16.683 [2024-07-26 16:41:36.136536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.683 [2024-07-26 16:41:36.136605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:16.683 qpair failed and we were unable to recover it. 00:36:16.683 [2024-07-26 16:41:36.136845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.683 [2024-07-26 16:41:36.136898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:16.683 qpair failed and we were unable to recover it. 00:36:16.683 [2024-07-26 16:41:36.137076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.683 [2024-07-26 16:41:36.137112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:16.683 qpair failed and we were unable to recover it. 00:36:16.683 [2024-07-26 16:41:36.137298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.683 [2024-07-26 16:41:36.137332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:16.683 qpair failed and we were unable to recover it. 
00:36:16.683 [2024-07-26 16:41:36.137511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.683 [2024-07-26 16:41:36.137545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:16.683 qpair failed and we were unable to recover it. 00:36:16.683 [2024-07-26 16:41:36.137737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.683 [2024-07-26 16:41:36.137788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:16.683 qpair failed and we were unable to recover it. 00:36:16.683 [2024-07-26 16:41:36.137960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.683 [2024-07-26 16:41:36.137995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.683 qpair failed and we were unable to recover it. 00:36:16.683 [2024-07-26 16:41:36.138224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.683 [2024-07-26 16:41:36.138263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.683 qpair failed and we were unable to recover it. 00:36:16.683 [2024-07-26 16:41:36.138458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.683 [2024-07-26 16:41:36.138495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.683 qpair failed and we were unable to recover it. 00:36:16.683 [2024-07-26 16:41:36.138692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.683 [2024-07-26 16:41:36.138751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.683 qpair failed and we were unable to recover it. 00:36:16.683 [2024-07-26 16:41:36.138958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.683 [2024-07-26 16:41:36.138992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.683 qpair failed and we were unable to recover it. 00:36:16.683 [2024-07-26 16:41:36.139147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.683 [2024-07-26 16:41:36.139183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.683 qpair failed and we were unable to recover it. 00:36:16.683 [2024-07-26 16:41:36.139390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.683 [2024-07-26 16:41:36.139425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.683 qpair failed and we were unable to recover it. 00:36:16.683 [2024-07-26 16:41:36.139617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.683 [2024-07-26 16:41:36.139656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.683 qpair failed and we were unable to recover it. 
00:36:16.683 [2024-07-26 16:41:36.139882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.683 [2024-07-26 16:41:36.139919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.683 qpair failed and we were unable to recover it. 00:36:16.683 [2024-07-26 16:41:36.140128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.683 [2024-07-26 16:41:36.140163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.683 qpair failed and we were unable to recover it. 00:36:16.683 [2024-07-26 16:41:36.140316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.683 [2024-07-26 16:41:36.140360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.683 qpair failed and we were unable to recover it. 00:36:16.683 [2024-07-26 16:41:36.140547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.683 [2024-07-26 16:41:36.140585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.683 qpair failed and we were unable to recover it. 00:36:16.683 [2024-07-26 16:41:36.140901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.683 [2024-07-26 16:41:36.140970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.683 qpair failed and we were unable to recover it. 00:36:16.683 [2024-07-26 16:41:36.141185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.683 [2024-07-26 16:41:36.141219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.683 qpair failed and we were unable to recover it. 00:36:16.683 [2024-07-26 16:41:36.141422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.683 [2024-07-26 16:41:36.141460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.683 qpair failed and we were unable to recover it. 00:36:16.683 [2024-07-26 16:41:36.141769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.683 [2024-07-26 16:41:36.141830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.683 qpair failed and we were unable to recover it. 00:36:16.683 [2024-07-26 16:41:36.142028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.683 [2024-07-26 16:41:36.142068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.683 qpair failed and we were unable to recover it. 00:36:16.683 [2024-07-26 16:41:36.142254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.683 [2024-07-26 16:41:36.142289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.683 qpair failed and we were unable to recover it. 
00:36:16.683 [2024-07-26 16:41:36.142699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.683 [2024-07-26 16:41:36.142771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.683 qpair failed and we were unable to recover it. 00:36:16.683 [2024-07-26 16:41:36.142984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.683 [2024-07-26 16:41:36.143021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.683 qpair failed and we were unable to recover it. 00:36:16.683 [2024-07-26 16:41:36.143261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.683 [2024-07-26 16:41:36.143295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.683 qpair failed and we were unable to recover it. 00:36:16.683 [2024-07-26 16:41:36.143514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.683 [2024-07-26 16:41:36.143571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.683 qpair failed and we were unable to recover it. 00:36:16.683 [2024-07-26 16:41:36.143792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.683 [2024-07-26 16:41:36.143830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.683 qpair failed and we were unable to recover it. 00:36:16.683 [2024-07-26 16:41:36.144031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.683 [2024-07-26 16:41:36.144074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.683 qpair failed and we were unable to recover it. 00:36:16.683 [2024-07-26 16:41:36.144257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.683 [2024-07-26 16:41:36.144291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.683 qpair failed and we were unable to recover it. 00:36:16.683 [2024-07-26 16:41:36.144489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.683 [2024-07-26 16:41:36.144527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.683 qpair failed and we were unable to recover it. 00:36:16.683 [2024-07-26 16:41:36.144777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.683 [2024-07-26 16:41:36.144815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.683 qpair failed and we were unable to recover it. 00:36:16.683 [2024-07-26 16:41:36.144993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.683 [2024-07-26 16:41:36.145027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.683 qpair failed and we were unable to recover it. 
00:36:16.683 [2024-07-26 16:41:36.145198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.683 [2024-07-26 16:41:36.145231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.683 qpair failed and we were unable to recover it. 00:36:16.683 [2024-07-26 16:41:36.145427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.683 [2024-07-26 16:41:36.145462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.683 qpair failed and we were unable to recover it. 00:36:16.683 [2024-07-26 16:41:36.145613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.684 [2024-07-26 16:41:36.145678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.684 qpair failed and we were unable to recover it. 00:36:16.684 [2024-07-26 16:41:36.145888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.684 [2024-07-26 16:41:36.145922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.684 qpair failed and we were unable to recover it. 00:36:16.684 [2024-07-26 16:41:36.146077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.684 [2024-07-26 16:41:36.146122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.684 qpair failed and we were unable to recover it. 00:36:16.684 [2024-07-26 16:41:36.146338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.684 [2024-07-26 16:41:36.146374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.684 qpair failed and we were unable to recover it. 00:36:16.684 [2024-07-26 16:41:36.146528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.684 [2024-07-26 16:41:36.146563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.684 qpair failed and we were unable to recover it. 00:36:16.684 [2024-07-26 16:41:36.146801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.684 [2024-07-26 16:41:36.146836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.684 qpair failed and we were unable to recover it. 00:36:16.684 [2024-07-26 16:41:36.147026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.684 [2024-07-26 16:41:36.147069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.684 qpair failed and we were unable to recover it. 00:36:16.684 [2024-07-26 16:41:36.147262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.684 [2024-07-26 16:41:36.147297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.684 qpair failed and we were unable to recover it. 
00:36:16.684 [2024-07-26 16:41:36.147493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.684 [2024-07-26 16:41:36.147527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.684 qpair failed and we were unable to recover it. 00:36:16.684 [2024-07-26 16:41:36.147783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.684 [2024-07-26 16:41:36.147817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.684 qpair failed and we were unable to recover it. 00:36:16.684 [2024-07-26 16:41:36.147979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.684 [2024-07-26 16:41:36.148014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.684 qpair failed and we were unable to recover it. 00:36:16.684 [2024-07-26 16:41:36.148215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.684 [2024-07-26 16:41:36.148251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.684 qpair failed and we were unable to recover it. 00:36:16.684 [2024-07-26 16:41:36.148427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.684 [2024-07-26 16:41:36.148474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.684 qpair failed and we were unable to recover it. 00:36:16.684 [2024-07-26 16:41:36.148649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.684 [2024-07-26 16:41:36.148684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.684 qpair failed and we were unable to recover it. 00:36:16.684 [2024-07-26 16:41:36.148850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.684 [2024-07-26 16:41:36.148887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.684 qpair failed and we were unable to recover it. 00:36:16.684 [2024-07-26 16:41:36.149093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.684 [2024-07-26 16:41:36.149130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.684 qpair failed and we were unable to recover it. 00:36:16.684 [2024-07-26 16:41:36.149319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.684 [2024-07-26 16:41:36.149353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.684 qpair failed and we were unable to recover it. 00:36:16.684 [2024-07-26 16:41:36.149571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.684 [2024-07-26 16:41:36.149605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.684 qpair failed and we were unable to recover it. 
00:36:16.684 [2024-07-26 16:41:36.149761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.684 [2024-07-26 16:41:36.149798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.684 qpair failed and we were unable to recover it. 00:36:16.684 [2024-07-26 16:41:36.150030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.684 [2024-07-26 16:41:36.150072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.684 qpair failed and we were unable to recover it. 00:36:16.684 [2024-07-26 16:41:36.150230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.684 [2024-07-26 16:41:36.150265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.684 qpair failed and we were unable to recover it. 00:36:16.684 [2024-07-26 16:41:36.150502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.684 [2024-07-26 16:41:36.150570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:16.684 qpair failed and we were unable to recover it. 00:36:16.684 [2024-07-26 16:41:36.150794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.684 [2024-07-26 16:41:36.150831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:16.684 qpair failed and we were unable to recover it. 00:36:16.684 [2024-07-26 16:41:36.151015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.684 [2024-07-26 16:41:36.151050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:16.684 qpair failed and we were unable to recover it. 00:36:16.684 [2024-07-26 16:41:36.151213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.684 [2024-07-26 16:41:36.151248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:16.684 qpair failed and we were unable to recover it. 00:36:16.684 [2024-07-26 16:41:36.151450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.684 [2024-07-26 16:41:36.151484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:16.684 qpair failed and we were unable to recover it. 00:36:16.684 [2024-07-26 16:41:36.151690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.684 [2024-07-26 16:41:36.151743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:16.684 qpair failed and we were unable to recover it. 00:36:16.684 [2024-07-26 16:41:36.151916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.684 [2024-07-26 16:41:36.151951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:16.684 qpair failed and we were unable to recover it. 
00:36:16.684 [2024-07-26 16:41:36.152154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.684 [2024-07-26 16:41:36.152209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:16.684 qpair failed and we were unable to recover it. 00:36:16.684 [2024-07-26 16:41:36.152474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.684 [2024-07-26 16:41:36.152529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.684 qpair failed and we were unable to recover it. 00:36:16.684 [2024-07-26 16:41:36.152711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.684 [2024-07-26 16:41:36.152751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.684 qpair failed and we were unable to recover it. 00:36:16.684 [2024-07-26 16:41:36.152920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.684 [2024-07-26 16:41:36.152958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.684 qpair failed and we were unable to recover it. 00:36:16.684 [2024-07-26 16:41:36.153152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.684 [2024-07-26 16:41:36.153188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.684 qpair failed and we were unable to recover it. 00:36:16.684 [2024-07-26 16:41:36.153402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.684 [2024-07-26 16:41:36.153439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.684 qpair failed and we were unable to recover it. 00:36:16.684 [2024-07-26 16:41:36.153639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.684 [2024-07-26 16:41:36.153676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.684 qpair failed and we were unable to recover it. 00:36:16.684 [2024-07-26 16:41:36.153845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.684 [2024-07-26 16:41:36.153882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.684 qpair failed and we were unable to recover it. 00:36:16.684 [2024-07-26 16:41:36.154144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.684 [2024-07-26 16:41:36.154180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.684 qpair failed and we were unable to recover it. 00:36:16.684 [2024-07-26 16:41:36.154384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.684 [2024-07-26 16:41:36.154417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.685 qpair failed and we were unable to recover it. 
00:36:16.685 [2024-07-26 16:41:36.154569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.685 [2024-07-26 16:41:36.154602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.685 qpair failed and we were unable to recover it. 00:36:16.685 [2024-07-26 16:41:36.154862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.685 [2024-07-26 16:41:36.154895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.685 qpair failed and we were unable to recover it. 00:36:16.685 [2024-07-26 16:41:36.155049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.685 [2024-07-26 16:41:36.155095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.685 qpair failed and we were unable to recover it. 00:36:16.685 [2024-07-26 16:41:36.155256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.685 [2024-07-26 16:41:36.155289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.685 qpair failed and we were unable to recover it. 00:36:16.685 [2024-07-26 16:41:36.155478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.685 [2024-07-26 16:41:36.155522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.685 qpair failed and we were unable to recover it. 00:36:16.685 [2024-07-26 16:41:36.155725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.685 [2024-07-26 16:41:36.155762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.685 qpair failed and we were unable to recover it. 00:36:16.685 [2024-07-26 16:41:36.155936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.685 [2024-07-26 16:41:36.155969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.685 qpair failed and we were unable to recover it. 00:36:16.685 [2024-07-26 16:41:36.156150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.685 [2024-07-26 16:41:36.156184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.685 qpair failed and we were unable to recover it. 00:36:16.685 [2024-07-26 16:41:36.156333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.685 [2024-07-26 16:41:36.156385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.685 qpair failed and we were unable to recover it. 00:36:16.685 [2024-07-26 16:41:36.156611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.685 [2024-07-26 16:41:36.156647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.685 qpair failed and we were unable to recover it. 
00:36:16.685 [2024-07-26 16:41:36.156843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.685 [2024-07-26 16:41:36.156881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.685 qpair failed and we were unable to recover it. 00:36:16.685 [2024-07-26 16:41:36.157076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.685 [2024-07-26 16:41:36.157127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.685 qpair failed and we were unable to recover it. 00:36:16.685 [2024-07-26 16:41:36.157329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.685 [2024-07-26 16:41:36.157363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.685 qpair failed and we were unable to recover it. 00:36:16.685 [2024-07-26 16:41:36.157520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.685 [2024-07-26 16:41:36.157564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.685 qpair failed and we were unable to recover it. 00:36:16.685 [2024-07-26 16:41:36.157738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.685 [2024-07-26 16:41:36.157775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.685 qpair failed and we were unable to recover it. 00:36:16.685 [2024-07-26 16:41:36.158021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.685 [2024-07-26 16:41:36.158066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.685 qpair failed and we were unable to recover it. 00:36:16.685 [2024-07-26 16:41:36.158235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.685 [2024-07-26 16:41:36.158269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.685 qpair failed and we were unable to recover it. 00:36:16.685 [2024-07-26 16:41:36.158491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.685 [2024-07-26 16:41:36.158528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.685 qpair failed and we were unable to recover it. 00:36:16.685 [2024-07-26 16:41:36.158789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.685 [2024-07-26 16:41:36.158853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.685 qpair failed and we were unable to recover it. 00:36:16.685 [2024-07-26 16:41:36.159054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.685 [2024-07-26 16:41:36.159114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.685 qpair failed and we were unable to recover it. 
00:36:16.685 [2024-07-26 16:41:36.159297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.685 [2024-07-26 16:41:36.159330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.685 qpair failed and we were unable to recover it. 00:36:16.685 [2024-07-26 16:41:36.159521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.685 [2024-07-26 16:41:36.159555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.685 qpair failed and we were unable to recover it. 00:36:16.685 [2024-07-26 16:41:36.159767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.685 [2024-07-26 16:41:36.159805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.685 qpair failed and we were unable to recover it. 00:36:16.685 [2024-07-26 16:41:36.160030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.685 [2024-07-26 16:41:36.160068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.685 qpair failed and we were unable to recover it. 00:36:16.685 [2024-07-26 16:41:36.160230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.685 [2024-07-26 16:41:36.160263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.685 qpair failed and we were unable to recover it. 00:36:16.685 [2024-07-26 16:41:36.160468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.685 [2024-07-26 16:41:36.160505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.685 qpair failed and we were unable to recover it. 00:36:16.685 [2024-07-26 16:41:36.160681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.685 [2024-07-26 16:41:36.160718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.685 qpair failed and we were unable to recover it. 00:36:16.685 [2024-07-26 16:41:36.160979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.685 [2024-07-26 16:41:36.161012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.685 qpair failed and we were unable to recover it. 00:36:16.685 [2024-07-26 16:41:36.161192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.685 [2024-07-26 16:41:36.161226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.685 qpair failed and we were unable to recover it. 00:36:16.685 [2024-07-26 16:41:36.161397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.685 [2024-07-26 16:41:36.161434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.685 qpair failed and we were unable to recover it. 
00:36:16.685 [2024-07-26 16:41:36.161635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.685 [2024-07-26 16:41:36.161673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.685 qpair failed and we were unable to recover it. 00:36:16.685 [2024-07-26 16:41:36.161904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.685 [2024-07-26 16:41:36.161942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.685 qpair failed and we were unable to recover it. 00:36:16.685 [2024-07-26 16:41:36.162156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.685 [2024-07-26 16:41:36.162189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.685 qpair failed and we were unable to recover it. 00:36:16.685 [2024-07-26 16:41:36.162365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.685 [2024-07-26 16:41:36.162409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.685 qpair failed and we were unable to recover it. 00:36:16.685 [2024-07-26 16:41:36.162595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.685 [2024-07-26 16:41:36.162633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.685 qpair failed and we were unable to recover it. 00:36:16.685 [2024-07-26 16:41:36.162803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.685 [2024-07-26 16:41:36.162840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.685 qpair failed and we were unable to recover it. 00:36:16.685 [2024-07-26 16:41:36.163011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.685 [2024-07-26 16:41:36.163045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.685 qpair failed and we were unable to recover it. 00:36:16.685 [2024-07-26 16:41:36.163210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.686 [2024-07-26 16:41:36.163244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.686 qpair failed and we were unable to recover it. 00:36:16.686 [2024-07-26 16:41:36.163437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.686 [2024-07-26 16:41:36.163472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.686 qpair failed and we were unable to recover it. 00:36:16.686 [2024-07-26 16:41:36.163670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.686 [2024-07-26 16:41:36.163707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.686 qpair failed and we were unable to recover it. 
00:36:16.686 [2024-07-26 16:41:36.163884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.686 [2024-07-26 16:41:36.163937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.686 qpair failed and we were unable to recover it. 00:36:16.686 [2024-07-26 16:41:36.164158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.686 [2024-07-26 16:41:36.164193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.686 qpair failed and we were unable to recover it. 00:36:16.686 [2024-07-26 16:41:36.164341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.686 [2024-07-26 16:41:36.164374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.686 qpair failed and we were unable to recover it. 00:36:16.686 [2024-07-26 16:41:36.164528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.686 [2024-07-26 16:41:36.164562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.686 qpair failed and we were unable to recover it. 00:36:16.686 [2024-07-26 16:41:36.164781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.686 [2024-07-26 16:41:36.164832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.686 qpair failed and we were unable to recover it. 00:36:16.686 [2024-07-26 16:41:36.165110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.686 [2024-07-26 16:41:36.165145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.686 qpair failed and we were unable to recover it. 00:36:16.686 [2024-07-26 16:41:36.165300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.686 [2024-07-26 16:41:36.165352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.686 qpair failed and we were unable to recover it. 00:36:16.686 [2024-07-26 16:41:36.165548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.686 [2024-07-26 16:41:36.165584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.686 qpair failed and we were unable to recover it. 00:36:16.686 [2024-07-26 16:41:36.165792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.686 [2024-07-26 16:41:36.165828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.686 qpair failed and we were unable to recover it. 00:36:16.686 [2024-07-26 16:41:36.166028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.686 [2024-07-26 16:41:36.166069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.686 qpair failed and we were unable to recover it. 
00:36:16.686 [2024-07-26 16:41:36.166259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.686 [2024-07-26 16:41:36.166291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.686 qpair failed and we were unable to recover it. 00:36:16.686 [2024-07-26 16:41:36.166449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.686 [2024-07-26 16:41:36.166481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.686 qpair failed and we were unable to recover it. 00:36:16.686 [2024-07-26 16:41:36.166636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.686 [2024-07-26 16:41:36.166687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.686 qpair failed and we were unable to recover it. 00:36:16.686 [2024-07-26 16:41:36.166871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.686 [2024-07-26 16:41:36.166907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.686 qpair failed and we were unable to recover it. 00:36:16.686 [2024-07-26 16:41:36.167098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.686 [2024-07-26 16:41:36.167132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.686 qpair failed and we were unable to recover it. 00:36:16.686 [2024-07-26 16:41:36.167316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.686 [2024-07-26 16:41:36.167349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.686 qpair failed and we were unable to recover it. 00:36:16.686 [2024-07-26 16:41:36.167568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.686 [2024-07-26 16:41:36.167601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.686 qpair failed and we were unable to recover it. 00:36:16.686 [2024-07-26 16:41:36.167745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.686 [2024-07-26 16:41:36.167778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.686 qpair failed and we were unable to recover it. 00:36:16.686 [2024-07-26 16:41:36.167986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.686 [2024-07-26 16:41:36.168023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.686 qpair failed and we were unable to recover it. 00:36:16.686 [2024-07-26 16:41:36.168198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.686 [2024-07-26 16:41:36.168235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.686 qpair failed and we were unable to recover it. 
00:36:16.686 [2024-07-26 16:41:36.168410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.686 [2024-07-26 16:41:36.168443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.686 qpair failed and we were unable to recover it. 00:36:16.686 [2024-07-26 16:41:36.168643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.686 [2024-07-26 16:41:36.168679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.686 qpair failed and we were unable to recover it. 00:36:16.686 [2024-07-26 16:41:36.168852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.686 [2024-07-26 16:41:36.168889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.686 qpair failed and we were unable to recover it. 00:36:16.686 [2024-07-26 16:41:36.169084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.686 [2024-07-26 16:41:36.169118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.686 qpair failed and we were unable to recover it. 00:36:16.686 [2024-07-26 16:41:36.169283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.686 [2024-07-26 16:41:36.169331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.686 qpair failed and we were unable to recover it. 00:36:16.686 [2024-07-26 16:41:36.169537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.686 [2024-07-26 16:41:36.169581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.686 qpair failed and we were unable to recover it. 00:36:16.686 [2024-07-26 16:41:36.169762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.686 [2024-07-26 16:41:36.169795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.686 qpair failed and we were unable to recover it. 00:36:16.687 [2024-07-26 16:41:36.169985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.687 [2024-07-26 16:41:36.170021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.687 qpair failed and we were unable to recover it. 00:36:16.687 [2024-07-26 16:41:36.170195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.687 [2024-07-26 16:41:36.170232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.687 qpair failed and we were unable to recover it. 00:36:16.687 [2024-07-26 16:41:36.170431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.687 [2024-07-26 16:41:36.170464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.687 qpair failed and we were unable to recover it. 
00:36:16.687 [2024-07-26 16:41:36.170642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.687 [2024-07-26 16:41:36.170680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.687 qpair failed and we were unable to recover it. 00:36:16.687 [2024-07-26 16:41:36.170882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.687 [2024-07-26 16:41:36.170932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.687 qpair failed and we were unable to recover it. 00:36:16.687 [2024-07-26 16:41:36.171124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.687 [2024-07-26 16:41:36.171162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.687 qpair failed and we were unable to recover it. 00:36:16.687 [2024-07-26 16:41:36.171345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.687 [2024-07-26 16:41:36.171380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.687 qpair failed and we were unable to recover it. 00:36:16.687 [2024-07-26 16:41:36.171599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.687 [2024-07-26 16:41:36.171647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.687 qpair failed and we were unable to recover it. 00:36:16.687 [2024-07-26 16:41:36.171837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.687 [2024-07-26 16:41:36.171882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.687 qpair failed and we were unable to recover it. 00:36:16.687 [2024-07-26 16:41:36.172102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.687 [2024-07-26 16:41:36.172141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.687 qpair failed and we were unable to recover it. 00:36:16.687 [2024-07-26 16:41:36.172318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.687 [2024-07-26 16:41:36.172359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.687 qpair failed and we were unable to recover it. 00:36:16.687 [2024-07-26 16:41:36.172582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.687 [2024-07-26 16:41:36.172616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.687 qpair failed and we were unable to recover it. 00:36:16.687 [2024-07-26 16:41:36.172807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.687 [2024-07-26 16:41:36.172842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.687 qpair failed and we were unable to recover it. 
00:36:16.687 [2024-07-26 16:41:36.172991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.687 [2024-07-26 16:41:36.173025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.687 qpair failed and we were unable to recover it. 00:36:16.687 [2024-07-26 16:41:36.173216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.687 [2024-07-26 16:41:36.173251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.687 qpair failed and we were unable to recover it. 00:36:16.687 [2024-07-26 16:41:36.173448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.687 [2024-07-26 16:41:36.173486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.687 qpair failed and we were unable to recover it. 00:36:16.687 [2024-07-26 16:41:36.173695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.687 [2024-07-26 16:41:36.173729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.687 qpair failed and we were unable to recover it. 00:36:16.687 [2024-07-26 16:41:36.173911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.687 [2024-07-26 16:41:36.173952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.687 qpair failed and we were unable to recover it. 00:36:16.687 [2024-07-26 16:41:36.174181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.687 [2024-07-26 16:41:36.174223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.687 qpair failed and we were unable to recover it. 00:36:16.687 [2024-07-26 16:41:36.174392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.687 [2024-07-26 16:41:36.174429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.687 qpair failed and we were unable to recover it. 00:36:16.687 [2024-07-26 16:41:36.174604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.687 [2024-07-26 16:41:36.174638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.687 qpair failed and we were unable to recover it. 00:36:16.687 [2024-07-26 16:41:36.174812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.687 [2024-07-26 16:41:36.174853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.687 qpair failed and we were unable to recover it. 00:36:16.687 [2024-07-26 16:41:36.175025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.687 [2024-07-26 16:41:36.175073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.687 qpair failed and we were unable to recover it. 
00:36:16.687 [2024-07-26 16:41:36.175280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.687 [2024-07-26 16:41:36.175314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.687 qpair failed and we were unable to recover it. 00:36:16.687 [2024-07-26 16:41:36.175538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.687 [2024-07-26 16:41:36.175575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.687 qpair failed and we were unable to recover it. 00:36:16.687 [2024-07-26 16:41:36.175754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.687 [2024-07-26 16:41:36.175793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.687 qpair failed and we were unable to recover it. 00:36:16.687 [2024-07-26 16:41:36.175998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.687 [2024-07-26 16:41:36.176034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.687 qpair failed and we were unable to recover it. 00:36:16.687 [2024-07-26 16:41:36.176225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.687 [2024-07-26 16:41:36.176262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.687 qpair failed and we were unable to recover it. 00:36:16.687 [2024-07-26 16:41:36.176457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.687 [2024-07-26 16:41:36.176499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.687 qpair failed and we were unable to recover it. 00:36:16.687 [2024-07-26 16:41:36.176685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.687 [2024-07-26 16:41:36.176719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.687 qpair failed and we were unable to recover it. 00:36:16.687 [2024-07-26 16:41:36.176951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.687 [2024-07-26 16:41:36.176989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.687 qpair failed and we were unable to recover it. 00:36:16.687 [2024-07-26 16:41:36.177199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.687 [2024-07-26 16:41:36.177238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.687 qpair failed and we were unable to recover it. 00:36:16.687 [2024-07-26 16:41:36.177451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.687 [2024-07-26 16:41:36.177488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.687 qpair failed and we were unable to recover it. 
00:36:16.687 [2024-07-26 16:41:36.177670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.687 [2024-07-26 16:41:36.177713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.687 qpair failed and we were unable to recover it. 00:36:16.687 [2024-07-26 16:41:36.177890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.687 [2024-07-26 16:41:36.177928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.687 qpair failed and we were unable to recover it. 00:36:16.687 [2024-07-26 16:41:36.178134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.687 [2024-07-26 16:41:36.178169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.687 qpair failed and we were unable to recover it. 00:36:16.687 [2024-07-26 16:41:36.178372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.687 [2024-07-26 16:41:36.178410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.688 qpair failed and we were unable to recover it. 00:36:16.688 [2024-07-26 16:41:36.178623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.688 [2024-07-26 16:41:36.178658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.688 qpair failed and we were unable to recover it. 00:36:16.688 [2024-07-26 16:41:36.178841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.688 [2024-07-26 16:41:36.178878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.688 qpair failed and we were unable to recover it. 00:36:16.688 [2024-07-26 16:41:36.179070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.688 [2024-07-26 16:41:36.179116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.688 qpair failed and we were unable to recover it. 00:36:16.688 [2024-07-26 16:41:36.179299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.688 [2024-07-26 16:41:36.179333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.688 qpair failed and we were unable to recover it. 00:36:16.688 [2024-07-26 16:41:36.179510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.688 [2024-07-26 16:41:36.179545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.688 qpair failed and we were unable to recover it. 00:36:16.688 [2024-07-26 16:41:36.179752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.688 [2024-07-26 16:41:36.179791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.688 qpair failed and we were unable to recover it. 
00:36:16.688 [2024-07-26 16:41:36.179975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.688 [2024-07-26 16:41:36.180010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.688 qpair failed and we were unable to recover it. 00:36:16.688 [2024-07-26 16:41:36.180203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.688 [2024-07-26 16:41:36.180241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.688 qpair failed and we were unable to recover it. 00:36:16.688 [2024-07-26 16:41:36.180454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.688 [2024-07-26 16:41:36.180496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.688 qpair failed and we were unable to recover it. 00:36:16.688 [2024-07-26 16:41:36.180741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.688 [2024-07-26 16:41:36.180779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.688 qpair failed and we were unable to recover it. 00:36:16.688 [2024-07-26 16:41:36.180947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.688 [2024-07-26 16:41:36.180993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.688 qpair failed and we were unable to recover it. 00:36:16.688 [2024-07-26 16:41:36.181161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.688 [2024-07-26 16:41:36.181204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.688 qpair failed and we were unable to recover it. 00:36:16.688 [2024-07-26 16:41:36.181402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.688 [2024-07-26 16:41:36.181440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.688 qpair failed and we were unable to recover it. 00:36:16.688 [2024-07-26 16:41:36.181685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.688 [2024-07-26 16:41:36.181721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.688 qpair failed and we were unable to recover it. 00:36:16.688 [2024-07-26 16:41:36.181928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.688 [2024-07-26 16:41:36.181965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.688 qpair failed and we were unable to recover it. 00:36:16.688 [2024-07-26 16:41:36.182151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.688 [2024-07-26 16:41:36.182193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.688 qpair failed and we were unable to recover it. 
00:36:16.688 [2024-07-26 16:41:36.182376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.688 [2024-07-26 16:41:36.182411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.688 qpair failed and we were unable to recover it. 00:36:16.688 [2024-07-26 16:41:36.182619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.688 [2024-07-26 16:41:36.182657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.688 qpair failed and we were unable to recover it. 00:36:16.688 [2024-07-26 16:41:36.182886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.688 [2024-07-26 16:41:36.182921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.688 qpair failed and we were unable to recover it. 00:36:16.688 [2024-07-26 16:41:36.183116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.688 [2024-07-26 16:41:36.183152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.688 qpair failed and we were unable to recover it. 00:36:16.688 [2024-07-26 16:41:36.183323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.688 [2024-07-26 16:41:36.183366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.688 qpair failed and we were unable to recover it. 00:36:16.688 [2024-07-26 16:41:36.183577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.688 [2024-07-26 16:41:36.183616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.688 qpair failed and we were unable to recover it. 00:36:16.688 [2024-07-26 16:41:36.183807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.688 [2024-07-26 16:41:36.183842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.688 qpair failed and we were unable to recover it. 00:36:16.688 [2024-07-26 16:41:36.184041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.688 [2024-07-26 16:41:36.184086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.688 qpair failed and we were unable to recover it. 00:36:16.688 [2024-07-26 16:41:36.184304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.688 [2024-07-26 16:41:36.184342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.688 qpair failed and we were unable to recover it. 00:36:16.688 [2024-07-26 16:41:36.184518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.688 [2024-07-26 16:41:36.184557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.688 qpair failed and we were unable to recover it. 
00:36:16.688 [2024-07-26 16:41:36.184757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.688 [2024-07-26 16:41:36.184796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.688 qpair failed and we were unable to recover it. 00:36:16.688 [2024-07-26 16:41:36.185044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.688 [2024-07-26 16:41:36.185086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.688 qpair failed and we were unable to recover it. 00:36:16.688 [2024-07-26 16:41:36.185243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.688 [2024-07-26 16:41:36.185281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.688 qpair failed and we were unable to recover it. 00:36:16.688 [2024-07-26 16:41:36.185485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.688 [2024-07-26 16:41:36.185524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.688 qpair failed and we were unable to recover it. 00:36:16.688 [2024-07-26 16:41:36.185689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.688 [2024-07-26 16:41:36.185728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.688 qpair failed and we were unable to recover it. 00:36:16.688 [2024-07-26 16:41:36.185932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.688 [2024-07-26 16:41:36.185967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.688 qpair failed and we were unable to recover it. 00:36:16.688 [2024-07-26 16:41:36.186139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.688 [2024-07-26 16:41:36.186175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.688 qpair failed and we were unable to recover it. 00:36:16.688 [2024-07-26 16:41:36.186365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.688 [2024-07-26 16:41:36.186403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.688 qpair failed and we were unable to recover it. 00:36:16.688 [2024-07-26 16:41:36.186610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.688 [2024-07-26 16:41:36.186646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.688 qpair failed and we were unable to recover it. 00:36:16.688 [2024-07-26 16:41:36.186865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.688 [2024-07-26 16:41:36.186902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.688 qpair failed and we were unable to recover it. 
00:36:16.688 [2024-07-26 16:41:36.187091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.688 [2024-07-26 16:41:36.187130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.688 qpair failed and we were unable to recover it. 00:36:16.689 [2024-07-26 16:41:36.187309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.689 [2024-07-26 16:41:36.187344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.689 qpair failed and we were unable to recover it. 00:36:16.689 [2024-07-26 16:41:36.187551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.689 [2024-07-26 16:41:36.187589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.689 qpair failed and we were unable to recover it. 00:36:16.689 [2024-07-26 16:41:36.187782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.689 [2024-07-26 16:41:36.187824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.689 qpair failed and we were unable to recover it. 00:36:16.689 [2024-07-26 16:41:36.188010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.689 [2024-07-26 16:41:36.188044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.689 qpair failed and we were unable to recover it. 00:36:16.689 [2024-07-26 16:41:36.188230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.689 [2024-07-26 16:41:36.188268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.689 qpair failed and we were unable to recover it. 00:36:16.689 [2024-07-26 16:41:36.188463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.689 [2024-07-26 16:41:36.188503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.689 qpair failed and we were unable to recover it. 00:36:16.689 [2024-07-26 16:41:36.188706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.689 [2024-07-26 16:41:36.188752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.689 qpair failed and we were unable to recover it. 00:36:16.689 [2024-07-26 16:41:36.188958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.689 [2024-07-26 16:41:36.188996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.689 qpair failed and we were unable to recover it. 00:36:16.689 [2024-07-26 16:41:36.189198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.689 [2024-07-26 16:41:36.189237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.689 qpair failed and we were unable to recover it. 
00:36:16.689 [2024-07-26 16:41:36.189459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.689 [2024-07-26 16:41:36.189493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.689 qpair failed and we were unable to recover it. 00:36:16.689 [2024-07-26 16:41:36.189720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.689 [2024-07-26 16:41:36.189765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.689 qpair failed and we were unable to recover it. 00:36:16.689 [2024-07-26 16:41:36.189995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.689 [2024-07-26 16:41:36.190033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.689 qpair failed and we were unable to recover it. 00:36:16.689 [2024-07-26 16:41:36.190256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.689 [2024-07-26 16:41:36.190291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.689 qpair failed and we were unable to recover it. 00:36:16.689 [2024-07-26 16:41:36.190496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.689 [2024-07-26 16:41:36.190534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.689 qpair failed and we were unable to recover it. 00:36:16.689 [2024-07-26 16:41:36.190705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.689 [2024-07-26 16:41:36.190742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.689 qpair failed and we were unable to recover it. 00:36:16.689 [2024-07-26 16:41:36.190941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.689 [2024-07-26 16:41:36.190976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.689 qpair failed and we were unable to recover it. 00:36:16.689 [2024-07-26 16:41:36.191179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.689 [2024-07-26 16:41:36.191218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.689 qpair failed and we were unable to recover it. 00:36:16.689 [2024-07-26 16:41:36.191429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.689 [2024-07-26 16:41:36.191464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.689 qpair failed and we were unable to recover it. 00:36:16.689 [2024-07-26 16:41:36.191644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.689 [2024-07-26 16:41:36.191678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.689 qpair failed and we were unable to recover it. 
00:36:16.689 [2024-07-26 16:41:36.191884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.689 [2024-07-26 16:41:36.191921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.689 qpair failed and we were unable to recover it. 00:36:16.689 [2024-07-26 16:41:36.192118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.689 [2024-07-26 16:41:36.192156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.689 qpair failed and we were unable to recover it. 00:36:16.689 [2024-07-26 16:41:36.192379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.689 [2024-07-26 16:41:36.192413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.689 qpair failed and we were unable to recover it. 00:36:16.689 [2024-07-26 16:41:36.192617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.689 [2024-07-26 16:41:36.192655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.689 qpair failed and we were unable to recover it. 00:36:16.689 [2024-07-26 16:41:36.192887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.689 [2024-07-26 16:41:36.192924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.689 qpair failed and we were unable to recover it. 00:36:16.689 [2024-07-26 16:41:36.193148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.689 [2024-07-26 16:41:36.193183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.689 qpair failed and we were unable to recover it. 00:36:16.689 [2024-07-26 16:41:36.193368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.689 [2024-07-26 16:41:36.193405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.689 qpair failed and we were unable to recover it. 00:36:16.689 [2024-07-26 16:41:36.193630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.689 [2024-07-26 16:41:36.193668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.689 qpair failed and we were unable to recover it. 00:36:16.689 [2024-07-26 16:41:36.193870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.689 [2024-07-26 16:41:36.193904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.689 qpair failed and we were unable to recover it. 00:36:16.689 [2024-07-26 16:41:36.194111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.689 [2024-07-26 16:41:36.194150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.689 qpair failed and we were unable to recover it. 
00:36:16.689 [2024-07-26 16:41:36.194331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.689 [2024-07-26 16:41:36.194365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.689 qpair failed and we were unable to recover it. 00:36:16.689 [2024-07-26 16:41:36.194536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.689 [2024-07-26 16:41:36.194571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.689 qpair failed and we were unable to recover it. 00:36:16.689 [2024-07-26 16:41:36.194773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.689 [2024-07-26 16:41:36.194810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.689 qpair failed and we were unable to recover it. 00:36:16.689 [2024-07-26 16:41:36.195036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.689 [2024-07-26 16:41:36.195079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.689 qpair failed and we were unable to recover it. 00:36:16.689 [2024-07-26 16:41:36.195265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.689 [2024-07-26 16:41:36.195300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.689 qpair failed and we were unable to recover it. 00:36:16.689 [2024-07-26 16:41:36.195526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.689 [2024-07-26 16:41:36.195564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.689 qpair failed and we were unable to recover it. 00:36:16.689 [2024-07-26 16:41:36.195753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.689 [2024-07-26 16:41:36.195791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.689 qpair failed and we were unable to recover it. 00:36:16.689 [2024-07-26 16:41:36.195986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.689 [2024-07-26 16:41:36.196020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.689 qpair failed and we were unable to recover it. 00:36:16.689 [2024-07-26 16:41:36.196215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.690 [2024-07-26 16:41:36.196260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.690 qpair failed and we were unable to recover it. 00:36:16.690 [2024-07-26 16:41:36.196462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.690 [2024-07-26 16:41:36.196500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.690 qpair failed and we were unable to recover it. 
00:36:16.690 [2024-07-26 16:41:36.196719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.690 [2024-07-26 16:41:36.196752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.690 qpair failed and we were unable to recover it. 00:36:16.690 [2024-07-26 16:41:36.196989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.690 [2024-07-26 16:41:36.197027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.690 qpair failed and we were unable to recover it. 00:36:16.690 [2024-07-26 16:41:36.197222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.690 [2024-07-26 16:41:36.197261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.690 qpair failed and we were unable to recover it. 00:36:16.690 [2024-07-26 16:41:36.197485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.690 [2024-07-26 16:41:36.197519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.690 qpair failed and we were unable to recover it. 00:36:16.690 [2024-07-26 16:41:36.197734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.690 [2024-07-26 16:41:36.197788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.690 qpair failed and we were unable to recover it. 00:36:16.690 [2024-07-26 16:41:36.198012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.690 [2024-07-26 16:41:36.198050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.690 qpair failed and we were unable to recover it. 00:36:16.690 [2024-07-26 16:41:36.198249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.690 [2024-07-26 16:41:36.198284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.690 qpair failed and we were unable to recover it. 00:36:16.690 [2024-07-26 16:41:36.198492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.690 [2024-07-26 16:41:36.198529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.690 qpair failed and we were unable to recover it. 00:36:16.690 [2024-07-26 16:41:36.198714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.690 [2024-07-26 16:41:36.198751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.690 qpair failed and we were unable to recover it. 00:36:16.690 [2024-07-26 16:41:36.198980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.690 [2024-07-26 16:41:36.199014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.690 qpair failed and we were unable to recover it. 
00:36:16.690 [2024-07-26 16:41:36.199269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.690 [2024-07-26 16:41:36.199303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.690 qpair failed and we were unable to recover it. 00:36:16.690 [2024-07-26 16:41:36.199502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.690 [2024-07-26 16:41:36.199546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.690 qpair failed and we were unable to recover it. 00:36:16.690 [2024-07-26 16:41:36.199722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.690 [2024-07-26 16:41:36.199757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.690 qpair failed and we were unable to recover it. 00:36:16.690 [2024-07-26 16:41:36.199966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.690 [2024-07-26 16:41:36.200019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.690 qpair failed and we were unable to recover it. 00:36:16.690 [2024-07-26 16:41:36.200240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.690 [2024-07-26 16:41:36.200274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.690 qpair failed and we were unable to recover it. 00:36:16.690 [2024-07-26 16:41:36.200450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.690 [2024-07-26 16:41:36.200484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.690 qpair failed and we were unable to recover it. 00:36:16.690 [2024-07-26 16:41:36.200822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.690 [2024-07-26 16:41:36.200882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.690 qpair failed and we were unable to recover it. 00:36:16.690 [2024-07-26 16:41:36.201089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.690 [2024-07-26 16:41:36.201123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.690 qpair failed and we were unable to recover it. 00:36:16.690 [2024-07-26 16:41:36.201305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.690 [2024-07-26 16:41:36.201339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.690 qpair failed and we were unable to recover it. 00:36:16.690 [2024-07-26 16:41:36.201638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.690 [2024-07-26 16:41:36.201694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.690 qpair failed and we were unable to recover it. 
00:36:16.690 [2024-07-26 16:41:36.202017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.690 [2024-07-26 16:41:36.202085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.690 qpair failed and we were unable to recover it. 00:36:16.690 [2024-07-26 16:41:36.202276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.690 [2024-07-26 16:41:36.202309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.690 qpair failed and we were unable to recover it. 00:36:16.690 [2024-07-26 16:41:36.202619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.690 [2024-07-26 16:41:36.202678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.690 qpair failed and we were unable to recover it. 00:36:16.690 [2024-07-26 16:41:36.202846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.690 [2024-07-26 16:41:36.202885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.690 qpair failed and we were unable to recover it. 00:36:16.690 [2024-07-26 16:41:36.203119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.690 [2024-07-26 16:41:36.203154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.690 qpair failed and we were unable to recover it. 00:36:16.690 [2024-07-26 16:41:36.203307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.690 [2024-07-26 16:41:36.203341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.690 qpair failed and we were unable to recover it. 00:36:16.690 [2024-07-26 16:41:36.203592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.690 [2024-07-26 16:41:36.203630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.690 qpair failed and we were unable to recover it. 00:36:16.690 [2024-07-26 16:41:36.203858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.690 [2024-07-26 16:41:36.203892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.690 qpair failed and we were unable to recover it. 00:36:16.690 [2024-07-26 16:41:36.204094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.690 [2024-07-26 16:41:36.204147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.690 qpair failed and we were unable to recover it. 00:36:16.690 [2024-07-26 16:41:36.204299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.690 [2024-07-26 16:41:36.204334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.690 qpair failed and we were unable to recover it. 
00:36:16.690 [2024-07-26 16:41:36.204571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.690 [2024-07-26 16:41:36.204605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.690 qpair failed and we were unable to recover it. 00:36:16.690 [2024-07-26 16:41:36.204963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.690 [2024-07-26 16:41:36.205020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.690 qpair failed and we were unable to recover it. 00:36:16.690 [2024-07-26 16:41:36.205224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.690 [2024-07-26 16:41:36.205259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.690 qpair failed and we were unable to recover it. 00:36:16.690 [2024-07-26 16:41:36.205433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.690 [2024-07-26 16:41:36.205467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.690 qpair failed and we were unable to recover it. 00:36:16.690 [2024-07-26 16:41:36.205758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.690 [2024-07-26 16:41:36.205793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.690 qpair failed and we were unable to recover it. 00:36:16.690 [2024-07-26 16:41:36.205996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.690 [2024-07-26 16:41:36.206034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.690 qpair failed and we were unable to recover it. 00:36:16.691 [2024-07-26 16:41:36.206264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.691 [2024-07-26 16:41:36.206298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.691 qpair failed and we were unable to recover it. 00:36:16.691 [2024-07-26 16:41:36.206520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.691 [2024-07-26 16:41:36.206558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.691 qpair failed and we were unable to recover it. 00:36:16.691 [2024-07-26 16:41:36.206770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.691 [2024-07-26 16:41:36.206810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.691 qpair failed and we were unable to recover it. 00:36:16.691 [2024-07-26 16:41:36.207025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.691 [2024-07-26 16:41:36.207066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.691 qpair failed and we were unable to recover it. 
00:36:16.691 [2024-07-26 16:41:36.207256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.691 [2024-07-26 16:41:36.207305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.691 qpair failed and we were unable to recover it. 00:36:16.691 [2024-07-26 16:41:36.207477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.691 [2024-07-26 16:41:36.207514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.691 qpair failed and we were unable to recover it. 00:36:16.691 [2024-07-26 16:41:36.207709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.691 [2024-07-26 16:41:36.207744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.691 qpair failed and we were unable to recover it. 00:36:16.691 [2024-07-26 16:41:36.207921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.691 [2024-07-26 16:41:36.207958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.691 qpair failed and we were unable to recover it. 00:36:16.691 [2024-07-26 16:41:36.208160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.691 [2024-07-26 16:41:36.208195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.691 qpair failed and we were unable to recover it. 00:36:16.691 [2024-07-26 16:41:36.208395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.691 [2024-07-26 16:41:36.208428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.691 qpair failed and we were unable to recover it. 00:36:16.691 [2024-07-26 16:41:36.208787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.691 [2024-07-26 16:41:36.208853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.691 qpair failed and we were unable to recover it. 00:36:16.691 [2024-07-26 16:41:36.209052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.691 [2024-07-26 16:41:36.209096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.691 qpair failed and we were unable to recover it. 00:36:16.691 [2024-07-26 16:41:36.209281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.691 [2024-07-26 16:41:36.209316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.691 qpair failed and we were unable to recover it. 00:36:16.691 [2024-07-26 16:41:36.209546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.691 [2024-07-26 16:41:36.209584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.691 qpair failed and we were unable to recover it. 
00:36:16.691 [2024-07-26 16:41:36.209809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.691 [2024-07-26 16:41:36.209847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.691 qpair failed and we were unable to recover it. 00:36:16.691 [2024-07-26 16:41:36.210071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.691 [2024-07-26 16:41:36.210111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.691 qpair failed and we were unable to recover it. 00:36:16.691 [2024-07-26 16:41:36.210340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.691 [2024-07-26 16:41:36.210377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.691 qpair failed and we were unable to recover it. 00:36:16.691 [2024-07-26 16:41:36.210577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.691 [2024-07-26 16:41:36.210611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.691 qpair failed and we were unable to recover it. 00:36:16.691 [2024-07-26 16:41:36.210794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.691 [2024-07-26 16:41:36.210828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.691 qpair failed and we were unable to recover it. 00:36:16.691 [2024-07-26 16:41:36.211024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.691 [2024-07-26 16:41:36.211066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.691 qpair failed and we were unable to recover it. 00:36:16.691 [2024-07-26 16:41:36.211262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.691 [2024-07-26 16:41:36.211299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.691 qpair failed and we were unable to recover it. 00:36:16.691 [2024-07-26 16:41:36.211496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.691 [2024-07-26 16:41:36.211535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.691 qpair failed and we were unable to recover it. 00:36:16.691 [2024-07-26 16:41:36.211844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.691 [2024-07-26 16:41:36.211903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.691 qpair failed and we were unable to recover it. 00:36:16.691 [2024-07-26 16:41:36.212126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.691 [2024-07-26 16:41:36.212161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.691 qpair failed and we were unable to recover it. 
00:36:16.691 [2024-07-26 16:41:36.212339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.691 [2024-07-26 16:41:36.212371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.691 qpair failed and we were unable to recover it. 00:36:16.691 [2024-07-26 16:41:36.212602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.691 [2024-07-26 16:41:36.212635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.691 qpair failed and we were unable to recover it. 00:36:16.691 [2024-07-26 16:41:36.212835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.691 [2024-07-26 16:41:36.212874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.691 qpair failed and we were unable to recover it. 00:36:16.691 [2024-07-26 16:41:36.213114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.691 [2024-07-26 16:41:36.213149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.691 qpair failed and we were unable to recover it. 00:36:16.691 [2024-07-26 16:41:36.213294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.691 [2024-07-26 16:41:36.213327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.691 qpair failed and we were unable to recover it. 00:36:16.691 [2024-07-26 16:41:36.213570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.691 [2024-07-26 16:41:36.213607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.691 qpair failed and we were unable to recover it. 00:36:16.691 [2024-07-26 16:41:36.213831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.691 [2024-07-26 16:41:36.213864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.691 qpair failed and we were unable to recover it. 00:36:16.691 [2024-07-26 16:41:36.214067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.692 [2024-07-26 16:41:36.214119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.692 qpair failed and we were unable to recover it. 00:36:16.692 [2024-07-26 16:41:36.214344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.692 [2024-07-26 16:41:36.214380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.692 qpair failed and we were unable to recover it. 00:36:16.692 [2024-07-26 16:41:36.214553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.692 [2024-07-26 16:41:36.214584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.692 qpair failed and we were unable to recover it. 
00:36:16.692 [2024-07-26 16:41:36.214833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.692 [2024-07-26 16:41:36.214872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.692 qpair failed and we were unable to recover it. 00:36:16.692 [2024-07-26 16:41:36.215085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.692 [2024-07-26 16:41:36.215145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.692 qpair failed and we were unable to recover it. 00:36:16.692 [2024-07-26 16:41:36.215359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.692 [2024-07-26 16:41:36.215402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.692 qpair failed and we were unable to recover it. 00:36:16.692 [2024-07-26 16:41:36.215561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.692 [2024-07-26 16:41:36.215594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.692 qpair failed and we were unable to recover it. 00:36:16.692 [2024-07-26 16:41:36.215790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.692 [2024-07-26 16:41:36.215840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.692 qpair failed and we were unable to recover it. 00:36:16.692 [2024-07-26 16:41:36.216027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.692 [2024-07-26 16:41:36.216069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.692 qpair failed and we were unable to recover it. 00:36:16.692 [2024-07-26 16:41:36.216278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.692 [2024-07-26 16:41:36.216312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.692 qpair failed and we were unable to recover it. 00:36:16.692 [2024-07-26 16:41:36.216508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.692 [2024-07-26 16:41:36.216546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.692 qpair failed and we were unable to recover it. 00:36:16.692 [2024-07-26 16:41:36.216754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.692 [2024-07-26 16:41:36.216787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.692 qpair failed and we were unable to recover it. 00:36:16.692 [2024-07-26 16:41:36.216992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.692 [2024-07-26 16:41:36.217030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.692 qpair failed and we were unable to recover it. 
00:36:16.692 [2024-07-26 16:41:36.217275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.692 [2024-07-26 16:41:36.217313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.692 qpair failed and we were unable to recover it. 00:36:16.692 [2024-07-26 16:41:36.217519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.692 [2024-07-26 16:41:36.217553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.692 qpair failed and we were unable to recover it. 00:36:16.692 [2024-07-26 16:41:36.217725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.692 [2024-07-26 16:41:36.217776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.692 qpair failed and we were unable to recover it. 00:36:16.692 [2024-07-26 16:41:36.218006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.692 [2024-07-26 16:41:36.218039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.692 qpair failed and we were unable to recover it. 00:36:16.692 [2024-07-26 16:41:36.218225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.692 [2024-07-26 16:41:36.218259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.692 qpair failed and we were unable to recover it. 00:36:16.692 [2024-07-26 16:41:36.218433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.692 [2024-07-26 16:41:36.218487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.692 qpair failed and we were unable to recover it. 00:36:16.692 [2024-07-26 16:41:36.218664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.692 [2024-07-26 16:41:36.218713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.692 qpair failed and we were unable to recover it. 00:36:16.692 [2024-07-26 16:41:36.218907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.692 [2024-07-26 16:41:36.218941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.692 qpair failed and we were unable to recover it. 00:36:16.692 [2024-07-26 16:41:36.219096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.692 [2024-07-26 16:41:36.219130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.692 qpair failed and we were unable to recover it. 00:36:16.692 [2024-07-26 16:41:36.219294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.692 [2024-07-26 16:41:36.219327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.692 qpair failed and we were unable to recover it. 
00:36:16.692 [2024-07-26 16:41:36.219528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.692 [2024-07-26 16:41:36.219562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.692 qpair failed and we were unable to recover it. 00:36:16.692 [2024-07-26 16:41:36.219764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.692 [2024-07-26 16:41:36.219808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.692 qpair failed and we were unable to recover it. 00:36:16.692 [2024-07-26 16:41:36.220005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.692 [2024-07-26 16:41:36.220043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.692 qpair failed and we were unable to recover it. 00:36:16.692 [2024-07-26 16:41:36.220243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.692 [2024-07-26 16:41:36.220276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.692 qpair failed and we were unable to recover it. 00:36:16.692 [2024-07-26 16:41:36.220518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.692 [2024-07-26 16:41:36.220573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.692 qpair failed and we were unable to recover it. 00:36:16.692 [2024-07-26 16:41:36.220791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.692 [2024-07-26 16:41:36.220833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.692 qpair failed and we were unable to recover it. 00:36:16.692 [2024-07-26 16:41:36.221003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.692 [2024-07-26 16:41:36.221048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.692 qpair failed and we were unable to recover it. 00:36:16.692 [2024-07-26 16:41:36.221284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.692 [2024-07-26 16:41:36.221321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.692 qpair failed and we were unable to recover it. 00:36:16.692 [2024-07-26 16:41:36.221523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.692 [2024-07-26 16:41:36.221557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.692 qpair failed and we were unable to recover it. 00:36:16.692 [2024-07-26 16:41:36.221735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.692 [2024-07-26 16:41:36.221769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.692 qpair failed and we were unable to recover it. 
00:36:16.692 [2024-07-26 16:41:36.221981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.692 [2024-07-26 16:41:36.222023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.692 qpair failed and we were unable to recover it. 00:36:16.692 [2024-07-26 16:41:36.222232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.692 [2024-07-26 16:41:36.222267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.692 qpair failed and we were unable to recover it. 00:36:16.692 [2024-07-26 16:41:36.222448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.692 [2024-07-26 16:41:36.222483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.692 qpair failed and we were unable to recover it. 00:36:16.692 [2024-07-26 16:41:36.222682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.692 [2024-07-26 16:41:36.222720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.692 qpair failed and we were unable to recover it. 00:36:16.692 [2024-07-26 16:41:36.222933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.693 [2024-07-26 16:41:36.222982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.693 qpair failed and we were unable to recover it. 00:36:16.693 [2024-07-26 16:41:36.223196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.693 [2024-07-26 16:41:36.223231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.693 qpair failed and we were unable to recover it. 00:36:16.693 [2024-07-26 16:41:36.223396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.693 [2024-07-26 16:41:36.223435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.693 qpair failed and we were unable to recover it. 00:36:16.693 [2024-07-26 16:41:36.223661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.693 [2024-07-26 16:41:36.223699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.693 qpair failed and we were unable to recover it. 00:36:16.693 [2024-07-26 16:41:36.223909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.693 [2024-07-26 16:41:36.223943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.693 qpair failed and we were unable to recover it. 00:36:16.693 [2024-07-26 16:41:36.224130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.693 [2024-07-26 16:41:36.224169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.693 qpair failed and we were unable to recover it. 
00:36:16.693 [2024-07-26 16:41:36.224364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.693 [2024-07-26 16:41:36.224401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.693 qpair failed and we were unable to recover it. 00:36:16.693 [2024-07-26 16:41:36.224595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.693 [2024-07-26 16:41:36.224629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.693 qpair failed and we were unable to recover it. 00:36:16.693 [2024-07-26 16:41:36.224902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.693 [2024-07-26 16:41:36.224959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.693 qpair failed and we were unable to recover it. 00:36:16.693 [2024-07-26 16:41:36.225159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.693 [2024-07-26 16:41:36.225197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.693 qpair failed and we were unable to recover it. 00:36:16.693 [2024-07-26 16:41:36.225408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.693 [2024-07-26 16:41:36.225443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.693 qpair failed and we were unable to recover it. 00:36:16.693 [2024-07-26 16:41:36.225650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.693 [2024-07-26 16:41:36.225682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.693 qpair failed and we were unable to recover it. 00:36:16.693 [2024-07-26 16:41:36.225936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.693 [2024-07-26 16:41:36.225974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.693 qpair failed and we were unable to recover it. 00:36:16.693 [2024-07-26 16:41:36.226183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.693 [2024-07-26 16:41:36.226217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.693 qpair failed and we were unable to recover it. 00:36:16.693 [2024-07-26 16:41:36.226417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.693 [2024-07-26 16:41:36.226480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.693 qpair failed and we were unable to recover it. 00:36:16.693 [2024-07-26 16:41:36.226730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.693 [2024-07-26 16:41:36.226770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.693 qpair failed and we were unable to recover it. 
00:36:16.693 [2024-07-26 16:41:36.226962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.693 [2024-07-26 16:41:36.227012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.693 qpair failed and we were unable to recover it. 00:36:16.693 [2024-07-26 16:41:36.227323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.693 [2024-07-26 16:41:36.227362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.693 qpair failed and we were unable to recover it. 00:36:16.693 [2024-07-26 16:41:36.227558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.693 [2024-07-26 16:41:36.227595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.693 qpair failed and we were unable to recover it. 00:36:16.693 [2024-07-26 16:41:36.227821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.693 [2024-07-26 16:41:36.227855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.693 qpair failed and we were unable to recover it. 00:36:16.693 [2024-07-26 16:41:36.228087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.693 [2024-07-26 16:41:36.228130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.693 qpair failed and we were unable to recover it. 00:36:16.693 [2024-07-26 16:41:36.228325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.693 [2024-07-26 16:41:36.228362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.693 qpair failed and we were unable to recover it. 00:36:16.693 [2024-07-26 16:41:36.228589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.693 [2024-07-26 16:41:36.228622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.693 qpair failed and we were unable to recover it. 00:36:16.693 [2024-07-26 16:41:36.228955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.693 [2024-07-26 16:41:36.229013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.693 qpair failed and we were unable to recover it. 00:36:16.693 [2024-07-26 16:41:36.229256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.693 [2024-07-26 16:41:36.229290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.693 qpair failed and we were unable to recover it. 00:36:16.693 [2024-07-26 16:41:36.229533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.693 [2024-07-26 16:41:36.229566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.693 qpair failed and we were unable to recover it. 
00:36:16.693 [2024-07-26 16:41:36.229856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.693 [2024-07-26 16:41:36.229913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.693 qpair failed and we were unable to recover it. 00:36:16.693 [2024-07-26 16:41:36.230143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.693 [2024-07-26 16:41:36.230186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.693 qpair failed and we were unable to recover it. 00:36:16.693 [2024-07-26 16:41:36.230372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.693 [2024-07-26 16:41:36.230406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.693 qpair failed and we were unable to recover it. 00:36:16.693 [2024-07-26 16:41:36.230641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.693 [2024-07-26 16:41:36.230678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.693 qpair failed and we were unable to recover it. 00:36:16.693 [2024-07-26 16:41:36.230881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.693 [2024-07-26 16:41:36.230919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.693 qpair failed and we were unable to recover it. 00:36:16.693 [2024-07-26 16:41:36.231123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.693 [2024-07-26 16:41:36.231157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.693 qpair failed and we were unable to recover it. 00:36:16.693 [2024-07-26 16:41:36.231328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.693 [2024-07-26 16:41:36.231364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.693 qpair failed and we were unable to recover it. 00:36:16.693 [2024-07-26 16:41:36.231561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.693 [2024-07-26 16:41:36.231599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.693 qpair failed and we were unable to recover it. 00:36:16.693 [2024-07-26 16:41:36.231797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.693 [2024-07-26 16:41:36.231831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.693 qpair failed and we were unable to recover it. 00:36:16.693 [2024-07-26 16:41:36.232055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.693 [2024-07-26 16:41:36.232110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.693 qpair failed and we were unable to recover it. 
00:36:16.693 [2024-07-26 16:41:36.232345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.693 [2024-07-26 16:41:36.232378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.693 qpair failed and we were unable to recover it. 00:36:16.693 [2024-07-26 16:41:36.232565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.693 [2024-07-26 16:41:36.232599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.693 qpair failed and we were unable to recover it. 00:36:16.694 [2024-07-26 16:41:36.232820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.694 [2024-07-26 16:41:36.232878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.694 qpair failed and we were unable to recover it. 00:36:16.694 [2024-07-26 16:41:36.233107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.694 [2024-07-26 16:41:36.233145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.694 qpair failed and we were unable to recover it. 00:36:16.694 [2024-07-26 16:41:36.233337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.694 [2024-07-26 16:41:36.233370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.694 qpair failed and we were unable to recover it. 00:36:16.694 [2024-07-26 16:41:36.233569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.694 [2024-07-26 16:41:36.233607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.694 qpair failed and we were unable to recover it. 00:36:16.694 [2024-07-26 16:41:36.233774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.694 [2024-07-26 16:41:36.233811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.694 qpair failed and we were unable to recover it. 00:36:16.694 [2024-07-26 16:41:36.234009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.694 [2024-07-26 16:41:36.234042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.694 qpair failed and we were unable to recover it. 00:36:16.694 [2024-07-26 16:41:36.234249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.694 [2024-07-26 16:41:36.234286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.694 qpair failed and we were unable to recover it. 00:36:16.694 [2024-07-26 16:41:36.234496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.694 [2024-07-26 16:41:36.234535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.694 qpair failed and we were unable to recover it. 
00:36:16.694 [2024-07-26 16:41:36.234732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.694 [2024-07-26 16:41:36.234765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.694 qpair failed and we were unable to recover it. 00:36:16.694 [2024-07-26 16:41:36.234963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.694 [2024-07-26 16:41:36.235000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.694 qpair failed and we were unable to recover it. 00:36:16.694 [2024-07-26 16:41:36.235184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.694 [2024-07-26 16:41:36.235217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.694 qpair failed and we were unable to recover it. 00:36:16.694 [2024-07-26 16:41:36.235380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.694 [2024-07-26 16:41:36.235413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.694 qpair failed and we were unable to recover it. 00:36:16.694 [2024-07-26 16:41:36.235683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.694 [2024-07-26 16:41:36.235739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.694 qpair failed and we were unable to recover it. 00:36:16.694 [2024-07-26 16:41:36.235940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.694 [2024-07-26 16:41:36.235978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.694 qpair failed and we were unable to recover it. 00:36:16.694 [2024-07-26 16:41:36.236154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.694 [2024-07-26 16:41:36.236189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.694 qpair failed and we were unable to recover it. 00:36:16.694 [2024-07-26 16:41:36.236385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.694 [2024-07-26 16:41:36.236423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.694 qpair failed and we were unable to recover it. 00:36:16.694 [2024-07-26 16:41:36.236648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.694 [2024-07-26 16:41:36.236685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.694 qpair failed and we were unable to recover it. 00:36:16.694 [2024-07-26 16:41:36.236878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.694 [2024-07-26 16:41:36.236911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.694 qpair failed and we were unable to recover it. 
00:36:16.694 [2024-07-26 16:41:36.237117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.694 [2024-07-26 16:41:36.237156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.694 qpair failed and we were unable to recover it. 00:36:16.694 [2024-07-26 16:41:36.237353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.694 [2024-07-26 16:41:36.237392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.694 qpair failed and we were unable to recover it. 00:36:16.694 [2024-07-26 16:41:36.237592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.694 [2024-07-26 16:41:36.237626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.694 qpair failed and we were unable to recover it. 00:36:16.694 [2024-07-26 16:41:36.237804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.694 [2024-07-26 16:41:36.237839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.694 qpair failed and we were unable to recover it. 00:36:16.694 [2024-07-26 16:41:36.238086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.694 [2024-07-26 16:41:36.238132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.694 qpair failed and we were unable to recover it. 00:36:16.694 [2024-07-26 16:41:36.238306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.694 [2024-07-26 16:41:36.238346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.694 qpair failed and we were unable to recover it. 00:36:16.694 [2024-07-26 16:41:36.238632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.694 [2024-07-26 16:41:36.238690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.694 qpair failed and we were unable to recover it. 00:36:16.694 [2024-07-26 16:41:36.238913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.694 [2024-07-26 16:41:36.238951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.694 qpair failed and we were unable to recover it. 00:36:16.694 [2024-07-26 16:41:36.239143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.694 [2024-07-26 16:41:36.239176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.694 qpair failed and we were unable to recover it. 00:36:16.694 [2024-07-26 16:41:36.239345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.694 [2024-07-26 16:41:36.239382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.694 qpair failed and we were unable to recover it. 
00:36:16.694 [2024-07-26 16:41:36.239581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.694 [2024-07-26 16:41:36.239615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.694 qpair failed and we were unable to recover it. 00:36:16.694 [2024-07-26 16:41:36.239776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.694 [2024-07-26 16:41:36.239815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.694 qpair failed and we were unable to recover it. 00:36:16.694 [2024-07-26 16:41:36.239992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.694 [2024-07-26 16:41:36.240026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.694 qpair failed and we were unable to recover it. 00:36:16.694 [2024-07-26 16:41:36.240230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.694 [2024-07-26 16:41:36.240264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.694 qpair failed and we were unable to recover it. 00:36:16.694 [2024-07-26 16:41:36.240462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.694 [2024-07-26 16:41:36.240495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.694 qpair failed and we were unable to recover it. 00:36:16.694 [2024-07-26 16:41:36.240852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.694 [2024-07-26 16:41:36.240906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.694 qpair failed and we were unable to recover it. 00:36:16.694 [2024-07-26 16:41:36.241138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.694 [2024-07-26 16:41:36.241177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.694 qpair failed and we were unable to recover it. 00:36:16.694 [2024-07-26 16:41:36.241398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.694 [2024-07-26 16:41:36.241433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.694 qpair failed and we were unable to recover it. 00:36:16.694 [2024-07-26 16:41:36.241654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.694 [2024-07-26 16:41:36.241719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.694 qpair failed and we were unable to recover it. 00:36:16.694 [2024-07-26 16:41:36.241947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.695 [2024-07-26 16:41:36.241985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.695 qpair failed and we were unable to recover it. 
00:36:16.695 [2024-07-26 16:41:36.242155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.695 [2024-07-26 16:41:36.242190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.695 qpair failed and we were unable to recover it. 00:36:16.695 [2024-07-26 16:41:36.242390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.695 [2024-07-26 16:41:36.242438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.695 qpair failed and we were unable to recover it. 00:36:16.695 [2024-07-26 16:41:36.242637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.695 [2024-07-26 16:41:36.242673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.695 qpair failed and we were unable to recover it. 00:36:16.695 [2024-07-26 16:41:36.242849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.695 [2024-07-26 16:41:36.242883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.695 qpair failed and we were unable to recover it. 00:36:16.695 [2024-07-26 16:41:36.243077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.695 [2024-07-26 16:41:36.243127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.695 qpair failed and we were unable to recover it. 00:36:16.695 [2024-07-26 16:41:36.243356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.695 [2024-07-26 16:41:36.243404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.695 qpair failed and we were unable to recover it. 00:36:16.695 [2024-07-26 16:41:36.243604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.695 [2024-07-26 16:41:36.243638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.695 qpair failed and we were unable to recover it. 00:36:16.695 [2024-07-26 16:41:36.243842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.695 [2024-07-26 16:41:36.243876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.695 qpair failed and we were unable to recover it. 00:36:16.695 [2024-07-26 16:41:36.244040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.695 [2024-07-26 16:41:36.244084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.695 qpair failed and we were unable to recover it. 00:36:16.695 [2024-07-26 16:41:36.244298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.695 [2024-07-26 16:41:36.244343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.695 qpair failed and we were unable to recover it. 
00:36:16.695 [2024-07-26 16:41:36.244593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.695 [2024-07-26 16:41:36.244631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.695 qpair failed and we were unable to recover it. 00:36:16.695 [2024-07-26 16:41:36.244791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.695 [2024-07-26 16:41:36.244828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.695 qpair failed and we were unable to recover it. 00:36:16.695 [2024-07-26 16:41:36.245023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.695 [2024-07-26 16:41:36.245056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.695 qpair failed and we were unable to recover it. 00:36:16.695 [2024-07-26 16:41:36.245270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.695 [2024-07-26 16:41:36.245304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.695 qpair failed and we were unable to recover it. 00:36:16.695 [2024-07-26 16:41:36.245545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.695 [2024-07-26 16:41:36.245583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.695 qpair failed and we were unable to recover it. 00:36:16.695 [2024-07-26 16:41:36.245758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.695 [2024-07-26 16:41:36.245793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.695 qpair failed and we were unable to recover it. 00:36:16.695 [2024-07-26 16:41:36.245984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.695 [2024-07-26 16:41:36.246032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.695 qpair failed and we were unable to recover it. 00:36:16.695 [2024-07-26 16:41:36.246251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.695 [2024-07-26 16:41:36.246289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.695 qpair failed and we were unable to recover it. 00:36:16.695 [2024-07-26 16:41:36.246532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.695 [2024-07-26 16:41:36.246566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.695 qpair failed and we were unable to recover it. 00:36:16.695 [2024-07-26 16:41:36.246883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.695 [2024-07-26 16:41:36.246952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.695 qpair failed and we were unable to recover it. 
00:36:16.695 [2024-07-26 16:41:36.247177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.695 [2024-07-26 16:41:36.247215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.695 qpair failed and we were unable to recover it. 00:36:16.695 [2024-07-26 16:41:36.247424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.695 [2024-07-26 16:41:36.247459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.695 qpair failed and we were unable to recover it. 00:36:16.695 [2024-07-26 16:41:36.247639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.695 [2024-07-26 16:41:36.247672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.695 qpair failed and we were unable to recover it. 00:36:16.695 [2024-07-26 16:41:36.247863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.695 [2024-07-26 16:41:36.247898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.695 qpair failed and we were unable to recover it. 00:36:16.695 [2024-07-26 16:41:36.248089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.695 [2024-07-26 16:41:36.248132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.695 qpair failed and we were unable to recover it. 00:36:16.695 [2024-07-26 16:41:36.248343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.695 [2024-07-26 16:41:36.248388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.695 qpair failed and we were unable to recover it. 00:36:16.695 [2024-07-26 16:41:36.248557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.695 [2024-07-26 16:41:36.248594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.695 qpair failed and we were unable to recover it. 00:36:16.695 [2024-07-26 16:41:36.248773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.695 [2024-07-26 16:41:36.248806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.695 qpair failed and we were unable to recover it. 00:36:16.695 [2024-07-26 16:41:36.248984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.695 [2024-07-26 16:41:36.249018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.695 qpair failed and we were unable to recover it. 00:36:16.695 [2024-07-26 16:41:36.249193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.695 [2024-07-26 16:41:36.249227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.695 qpair failed and we were unable to recover it. 
00:36:16.695 [2024-07-26 16:41:36.249395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.695 [2024-07-26 16:41:36.249429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.695 qpair failed and we were unable to recover it. 00:36:16.695 [2024-07-26 16:41:36.249638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.695 [2024-07-26 16:41:36.249680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.695 qpair failed and we were unable to recover it. 00:36:16.695 [2024-07-26 16:41:36.249900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.695 [2024-07-26 16:41:36.249937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.695 qpair failed and we were unable to recover it. 00:36:16.695 [2024-07-26 16:41:36.250142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.695 [2024-07-26 16:41:36.250176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.695 qpair failed and we were unable to recover it. 00:36:16.695 [2024-07-26 16:41:36.250357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.695 [2024-07-26 16:41:36.250390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.695 qpair failed and we were unable to recover it. 00:36:16.695 [2024-07-26 16:41:36.250591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.696 [2024-07-26 16:41:36.250628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.696 qpair failed and we were unable to recover it. 00:36:16.696 [2024-07-26 16:41:36.250836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.696 [2024-07-26 16:41:36.250871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.696 qpair failed and we were unable to recover it. 00:36:16.696 [2024-07-26 16:41:36.251055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.696 [2024-07-26 16:41:36.251098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.696 qpair failed and we were unable to recover it. 00:36:16.696 [2024-07-26 16:41:36.251285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.696 [2024-07-26 16:41:36.251319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.696 qpair failed and we were unable to recover it. 00:36:16.696 [2024-07-26 16:41:36.251505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.696 [2024-07-26 16:41:36.251548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.696 qpair failed and we were unable to recover it. 
00:36:16.696 [2024-07-26 16:41:36.251783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.696 [2024-07-26 16:41:36.251821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.696 qpair failed and we were unable to recover it. 00:36:16.696 [2024-07-26 16:41:36.252046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.696 [2024-07-26 16:41:36.252093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.696 qpair failed and we were unable to recover it. 00:36:16.696 [2024-07-26 16:41:36.252329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.696 [2024-07-26 16:41:36.252362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.696 qpair failed and we were unable to recover it. 00:36:16.696 [2024-07-26 16:41:36.252532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.696 [2024-07-26 16:41:36.252571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.696 qpair failed and we were unable to recover it. 00:36:16.696 [2024-07-26 16:41:36.252757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.696 [2024-07-26 16:41:36.252795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.696 qpair failed and we were unable to recover it. 00:36:16.696 [2024-07-26 16:41:36.252997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.696 [2024-07-26 16:41:36.253031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.696 qpair failed and we were unable to recover it. 00:36:16.696 [2024-07-26 16:41:36.253219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.696 [2024-07-26 16:41:36.253253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.696 qpair failed and we were unable to recover it. 00:36:16.696 [2024-07-26 16:41:36.253475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.696 [2024-07-26 16:41:36.253514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.696 qpair failed and we were unable to recover it. 00:36:16.696 [2024-07-26 16:41:36.253714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.696 [2024-07-26 16:41:36.253748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.696 qpair failed and we were unable to recover it. 00:36:16.696 [2024-07-26 16:41:36.253977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.696 [2024-07-26 16:41:36.254014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.696 qpair failed and we were unable to recover it. 
00:36:16.696 [2024-07-26 16:41:36.254241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.696 [2024-07-26 16:41:36.254275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.696 qpair failed and we were unable to recover it. 00:36:16.696 [2024-07-26 16:41:36.254449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.696 [2024-07-26 16:41:36.254483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.696 qpair failed and we were unable to recover it. 00:36:16.696 [2024-07-26 16:41:36.254672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.696 [2024-07-26 16:41:36.254708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.696 qpair failed and we were unable to recover it. 00:36:16.696 [2024-07-26 16:41:36.254918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.696 [2024-07-26 16:41:36.254955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.696 qpair failed and we were unable to recover it. 00:36:16.696 [2024-07-26 16:41:36.255181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.696 [2024-07-26 16:41:36.255215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.696 qpair failed and we were unable to recover it. 00:36:16.696 [2024-07-26 16:41:36.255374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.696 [2024-07-26 16:41:36.255407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.696 qpair failed and we were unable to recover it. 00:36:16.696 [2024-07-26 16:41:36.255577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.696 [2024-07-26 16:41:36.255610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.696 qpair failed and we were unable to recover it. 00:36:16.696 [2024-07-26 16:41:36.255851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.696 [2024-07-26 16:41:36.255885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.696 qpair failed and we were unable to recover it. 00:36:16.696 [2024-07-26 16:41:36.256112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.696 [2024-07-26 16:41:36.256151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.696 qpair failed and we were unable to recover it. 00:36:16.696 [2024-07-26 16:41:36.256371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.696 [2024-07-26 16:41:36.256408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.696 qpair failed and we were unable to recover it. 
00:36:16.696 [2024-07-26 16:41:36.256572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.696 [2024-07-26 16:41:36.256605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.696 qpair failed and we were unable to recover it. 00:36:16.696 [2024-07-26 16:41:36.256810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.696 [2024-07-26 16:41:36.256848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.696 qpair failed and we were unable to recover it. 00:36:16.696 [2024-07-26 16:41:36.257056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.696 [2024-07-26 16:41:36.257133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.696 qpair failed and we were unable to recover it. 00:36:16.696 [2024-07-26 16:41:36.257361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.696 [2024-07-26 16:41:36.257395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.696 qpair failed and we were unable to recover it. 00:36:16.696 [2024-07-26 16:41:36.257568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.696 [2024-07-26 16:41:36.257605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.696 qpair failed and we were unable to recover it. 00:36:16.696 [2024-07-26 16:41:36.257806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.696 [2024-07-26 16:41:36.257844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.696 qpair failed and we were unable to recover it. 00:36:16.696 [2024-07-26 16:41:36.258052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.696 [2024-07-26 16:41:36.258093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.696 qpair failed and we were unable to recover it. 00:36:16.696 [2024-07-26 16:41:36.258324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.696 [2024-07-26 16:41:36.258362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.696 qpair failed and we were unable to recover it. 00:36:16.696 [2024-07-26 16:41:36.258555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.696 [2024-07-26 16:41:36.258592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.696 qpair failed and we were unable to recover it. 00:36:16.697 [2024-07-26 16:41:36.258797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.697 [2024-07-26 16:41:36.258830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.697 qpair failed and we were unable to recover it. 
00:36:16.697 [2024-07-26 16:41:36.259026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.697 [2024-07-26 16:41:36.259073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.697 qpair failed and we were unable to recover it. 00:36:16.697 [2024-07-26 16:41:36.259275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.697 [2024-07-26 16:41:36.259317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.697 qpair failed and we were unable to recover it. 00:36:16.697 [2024-07-26 16:41:36.259519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.697 [2024-07-26 16:41:36.259553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.697 qpair failed and we were unable to recover it. 00:36:16.697 [2024-07-26 16:41:36.259761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.697 [2024-07-26 16:41:36.259798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.697 qpair failed and we were unable to recover it. 00:36:16.697 [2024-07-26 16:41:36.259990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.697 [2024-07-26 16:41:36.260023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.697 qpair failed and we were unable to recover it. 00:36:16.697 [2024-07-26 16:41:36.260224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.697 [2024-07-26 16:41:36.260258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.697 qpair failed and we were unable to recover it. 00:36:16.697 [2024-07-26 16:41:36.260406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.697 [2024-07-26 16:41:36.260441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.697 qpair failed and we were unable to recover it. 00:36:16.697 [2024-07-26 16:41:36.260652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.697 [2024-07-26 16:41:36.260690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.697 qpair failed and we were unable to recover it. 00:36:16.697 [2024-07-26 16:41:36.260927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.697 [2024-07-26 16:41:36.260961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.697 qpair failed and we were unable to recover it. 00:36:16.697 [2024-07-26 16:41:36.261183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.697 [2024-07-26 16:41:36.261221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.697 qpair failed and we were unable to recover it. 
00:36:16.697 [2024-07-26 16:41:36.261447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.697 [2024-07-26 16:41:36.261481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.697 qpair failed and we were unable to recover it. 00:36:16.697 [2024-07-26 16:41:36.261670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.697 [2024-07-26 16:41:36.261716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.697 qpair failed and we were unable to recover it. 00:36:16.697 [2024-07-26 16:41:36.261896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.697 [2024-07-26 16:41:36.261931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.697 qpair failed and we were unable to recover it. 00:36:16.697 [2024-07-26 16:41:36.262162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.697 [2024-07-26 16:41:36.262200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.697 qpair failed and we were unable to recover it. 00:36:16.697 [2024-07-26 16:41:36.262423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.697 [2024-07-26 16:41:36.262457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.697 qpair failed and we were unable to recover it. 00:36:16.697 [2024-07-26 16:41:36.262655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.697 [2024-07-26 16:41:36.262692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.697 qpair failed and we were unable to recover it. 00:36:16.697 [2024-07-26 16:41:36.262874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.697 [2024-07-26 16:41:36.262912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.697 qpair failed and we were unable to recover it. 00:36:16.697 [2024-07-26 16:41:36.263117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.697 [2024-07-26 16:41:36.263151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.697 qpair failed and we were unable to recover it. 00:36:16.697 [2024-07-26 16:41:36.263356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.697 [2024-07-26 16:41:36.263393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.697 qpair failed and we were unable to recover it. 00:36:16.697 [2024-07-26 16:41:36.263638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.697 [2024-07-26 16:41:36.263672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.697 qpair failed and we were unable to recover it. 
00:36:16.697 [2024-07-26 16:41:36.263846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.697 [2024-07-26 16:41:36.263891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.697 qpair failed and we were unable to recover it. 00:36:16.697 [2024-07-26 16:41:36.264072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.697 [2024-07-26 16:41:36.264106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.697 qpair failed and we were unable to recover it. 00:36:16.697 [2024-07-26 16:41:36.264282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.697 [2024-07-26 16:41:36.264316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.697 qpair failed and we were unable to recover it. 00:36:16.697 [2024-07-26 16:41:36.264525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.697 [2024-07-26 16:41:36.264559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.697 qpair failed and we were unable to recover it. 00:36:16.697 [2024-07-26 16:41:36.264783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.697 [2024-07-26 16:41:36.264820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.697 qpair failed and we were unable to recover it. 00:36:16.697 [2024-07-26 16:41:36.265036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.697 [2024-07-26 16:41:36.265083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.697 qpair failed and we were unable to recover it. 00:36:16.697 [2024-07-26 16:41:36.265311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.697 [2024-07-26 16:41:36.265344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.697 qpair failed and we were unable to recover it. 00:36:16.697 [2024-07-26 16:41:36.265591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.697 [2024-07-26 16:41:36.265634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.697 qpair failed and we were unable to recover it. 00:36:16.697 [2024-07-26 16:41:36.265807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.697 [2024-07-26 16:41:36.265844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.697 qpair failed and we were unable to recover it. 00:36:16.697 [2024-07-26 16:41:36.266045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.697 [2024-07-26 16:41:36.266099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.697 qpair failed and we were unable to recover it. 
00:36:16.697 [2024-07-26 16:41:36.266272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.697 [2024-07-26 16:41:36.266309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.697 qpair failed and we were unable to recover it. 00:36:16.697 [2024-07-26 16:41:36.266518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.697 [2024-07-26 16:41:36.266552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.697 qpair failed and we were unable to recover it. 00:36:16.697 [2024-07-26 16:41:36.266733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.697 [2024-07-26 16:41:36.266768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.697 qpair failed and we were unable to recover it. 00:36:16.697 [2024-07-26 16:41:36.266972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.697 [2024-07-26 16:41:36.267009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.697 qpair failed and we were unable to recover it. 00:36:16.697 [2024-07-26 16:41:36.267210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.697 [2024-07-26 16:41:36.267245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.697 qpair failed and we were unable to recover it. 00:36:16.697 [2024-07-26 16:41:36.267450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.697 [2024-07-26 16:41:36.267485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.697 qpair failed and we were unable to recover it. 00:36:16.697 [2024-07-26 16:41:36.267683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.698 [2024-07-26 16:41:36.267721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.698 qpair failed and we were unable to recover it. 00:36:16.698 [2024-07-26 16:41:36.267892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.698 [2024-07-26 16:41:36.267929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.698 qpair failed and we were unable to recover it. 00:36:16.698 [2024-07-26 16:41:36.268104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.698 [2024-07-26 16:41:36.268139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.698 qpair failed and we were unable to recover it. 00:36:16.698 [2024-07-26 16:41:36.268293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.698 [2024-07-26 16:41:36.268327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.698 qpair failed and we were unable to recover it. 
00:36:16.698 [2024-07-26 16:41:36.268528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.698 [2024-07-26 16:41:36.268566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.698 qpair failed and we were unable to recover it. 00:36:16.698 [2024-07-26 16:41:36.268738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.698 [2024-07-26 16:41:36.268777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.698 qpair failed and we were unable to recover it. 00:36:16.698 [2024-07-26 16:41:36.269014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.698 [2024-07-26 16:41:36.269051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.698 qpair failed and we were unable to recover it. 00:36:16.698 [2024-07-26 16:41:36.269229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.698 [2024-07-26 16:41:36.269266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.698 qpair failed and we were unable to recover it. 00:36:16.698 [2024-07-26 16:41:36.269423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.698 [2024-07-26 16:41:36.269457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.698 qpair failed and we were unable to recover it. 00:36:16.698 [2024-07-26 16:41:36.269631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.698 [2024-07-26 16:41:36.269668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.698 qpair failed and we were unable to recover it. 00:36:16.698 [2024-07-26 16:41:36.269857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.698 [2024-07-26 16:41:36.269894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.698 qpair failed and we were unable to recover it. 00:36:16.698 [2024-07-26 16:41:36.270091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.698 [2024-07-26 16:41:36.270134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.698 qpair failed and we were unable to recover it. 00:36:16.698 [2024-07-26 16:41:36.270339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.698 [2024-07-26 16:41:36.270376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.698 qpair failed and we were unable to recover it. 00:36:16.698 [2024-07-26 16:41:36.270599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.698 [2024-07-26 16:41:36.270636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.698 qpair failed and we were unable to recover it. 
00:36:16.698 [2024-07-26 16:41:36.270825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.698 [2024-07-26 16:41:36.270860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.698 qpair failed and we were unable to recover it. 00:36:16.698 [2024-07-26 16:41:36.271073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.698 [2024-07-26 16:41:36.271118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.698 qpair failed and we were unable to recover it. 00:36:16.698 [2024-07-26 16:41:36.271361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.698 [2024-07-26 16:41:36.271394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.698 qpair failed and we were unable to recover it. 00:36:16.698 [2024-07-26 16:41:36.271591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.698 [2024-07-26 16:41:36.271625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.698 qpair failed and we were unable to recover it. 00:36:16.698 [2024-07-26 16:41:36.271843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.698 [2024-07-26 16:41:36.271892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.698 qpair failed and we were unable to recover it. 00:36:16.698 [2024-07-26 16:41:36.272128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.698 [2024-07-26 16:41:36.272166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.698 qpair failed and we were unable to recover it. 00:36:16.698 [2024-07-26 16:41:36.272359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.698 [2024-07-26 16:41:36.272402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.698 qpair failed and we were unable to recover it. 00:36:16.698 [2024-07-26 16:41:36.272579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.698 [2024-07-26 16:41:36.272613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.698 qpair failed and we were unable to recover it. 00:36:16.698 [2024-07-26 16:41:36.272835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.698 [2024-07-26 16:41:36.272881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.698 qpair failed and we were unable to recover it. 00:36:16.698 [2024-07-26 16:41:36.273088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.698 [2024-07-26 16:41:36.273122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.698 qpair failed and we were unable to recover it. 
00:36:16.698 [2024-07-26 16:41:36.273316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.698 [2024-07-26 16:41:36.273353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.698 qpair failed and we were unable to recover it. 00:36:16.698 [2024-07-26 16:41:36.273521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.698 [2024-07-26 16:41:36.273558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.698 qpair failed and we were unable to recover it. 00:36:16.698 [2024-07-26 16:41:36.273755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.698 [2024-07-26 16:41:36.273789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.698 qpair failed and we were unable to recover it. 00:36:16.698 [2024-07-26 16:41:36.274020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.698 [2024-07-26 16:41:36.274064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.698 qpair failed and we were unable to recover it. 00:36:16.698 [2024-07-26 16:41:36.274287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.698 [2024-07-26 16:41:36.274325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.698 qpair failed and we were unable to recover it. 00:36:16.698 [2024-07-26 16:41:36.274532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.698 [2024-07-26 16:41:36.274566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.698 qpair failed and we were unable to recover it. 00:36:16.698 [2024-07-26 16:41:36.274779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.698 [2024-07-26 16:41:36.274816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.698 qpair failed and we were unable to recover it. 00:36:16.698 [2024-07-26 16:41:36.275023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.698 [2024-07-26 16:41:36.275079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.698 qpair failed and we were unable to recover it. 00:36:16.698 [2024-07-26 16:41:36.275296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.698 [2024-07-26 16:41:36.275331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.698 qpair failed and we were unable to recover it. 00:36:16.698 [2024-07-26 16:41:36.275532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.698 [2024-07-26 16:41:36.275569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.698 qpair failed and we were unable to recover it. 
00:36:16.698 [2024-07-26 16:41:36.275732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.698 [2024-07-26 16:41:36.275771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.698 qpair failed and we were unable to recover it. 00:36:16.698 [2024-07-26 16:41:36.275971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.698 [2024-07-26 16:41:36.276005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.698 qpair failed and we were unable to recover it. 00:36:16.698 [2024-07-26 16:41:36.276217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.698 [2024-07-26 16:41:36.276255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.698 qpair failed and we were unable to recover it. 00:36:16.698 [2024-07-26 16:41:36.276451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.698 [2024-07-26 16:41:36.276489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.698 qpair failed and we were unable to recover it. 00:36:16.699 [2024-07-26 16:41:36.276697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.699 [2024-07-26 16:41:36.276730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.699 qpair failed and we were unable to recover it. 00:36:16.699 [2024-07-26 16:41:36.276949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.699 [2024-07-26 16:41:36.276983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.699 qpair failed and we were unable to recover it. 00:36:16.699 [2024-07-26 16:41:36.277175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.699 [2024-07-26 16:41:36.277215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.699 qpair failed and we were unable to recover it. 00:36:16.699 [2024-07-26 16:41:36.277386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.699 [2024-07-26 16:41:36.277419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.699 qpair failed and we were unable to recover it. 00:36:16.699 [2024-07-26 16:41:36.277653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.699 [2024-07-26 16:41:36.277692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.699 qpair failed and we were unable to recover it. 00:36:16.699 [2024-07-26 16:41:36.277901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.699 [2024-07-26 16:41:36.277938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.699 qpair failed and we were unable to recover it. 
00:36:16.699 [2024-07-26 16:41:36.278141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.699 [2024-07-26 16:41:36.278175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.699 qpair failed and we were unable to recover it. 00:36:16.699 [2024-07-26 16:41:36.278373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.699 [2024-07-26 16:41:36.278415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.699 qpair failed and we were unable to recover it. 00:36:16.699 [2024-07-26 16:41:36.278580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.699 [2024-07-26 16:41:36.278618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.699 qpair failed and we were unable to recover it. 00:36:16.699 [2024-07-26 16:41:36.278844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.699 [2024-07-26 16:41:36.278878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.699 qpair failed and we were unable to recover it. 00:36:16.699 [2024-07-26 16:41:36.279096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.699 [2024-07-26 16:41:36.279144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.699 qpair failed and we were unable to recover it. 00:36:16.699 [2024-07-26 16:41:36.279334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.699 [2024-07-26 16:41:36.279372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.699 qpair failed and we were unable to recover it. 00:36:16.699 [2024-07-26 16:41:36.279592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.699 [2024-07-26 16:41:36.279626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.699 qpair failed and we were unable to recover it. 00:36:16.699 [2024-07-26 16:41:36.279821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.699 [2024-07-26 16:41:36.279859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.699 qpair failed and we were unable to recover it. 00:36:16.699 [2024-07-26 16:41:36.280045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.699 [2024-07-26 16:41:36.280091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.699 qpair failed and we were unable to recover it. 00:36:16.699 [2024-07-26 16:41:36.280294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.699 [2024-07-26 16:41:36.280328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.699 qpair failed and we were unable to recover it. 
00:36:16.699 [2024-07-26 16:41:36.280520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.699 [2024-07-26 16:41:36.280557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.699 qpair failed and we were unable to recover it. 00:36:16.699 [2024-07-26 16:41:36.280747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.699 [2024-07-26 16:41:36.280784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.699 qpair failed and we were unable to recover it. 00:36:16.699 [2024-07-26 16:41:36.281005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.699 [2024-07-26 16:41:36.281039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.699 qpair failed and we were unable to recover it. 00:36:16.699 [2024-07-26 16:41:36.281244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.699 [2024-07-26 16:41:36.281282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.699 qpair failed and we were unable to recover it. 00:36:16.699 [2024-07-26 16:41:36.281455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.699 [2024-07-26 16:41:36.281489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.699 qpair failed and we were unable to recover it. 00:36:16.699 [2024-07-26 16:41:36.281684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.699 [2024-07-26 16:41:36.281718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.699 qpair failed and we were unable to recover it. 00:36:16.699 [2024-07-26 16:41:36.281919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.699 [2024-07-26 16:41:36.281957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.699 qpair failed and we were unable to recover it. 00:36:16.699 [2024-07-26 16:41:36.282154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.699 [2024-07-26 16:41:36.282193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.699 qpair failed and we were unable to recover it. 00:36:16.699 [2024-07-26 16:41:36.282388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.699 [2024-07-26 16:41:36.282423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.699 qpair failed and we were unable to recover it. 00:36:16.699 [2024-07-26 16:41:36.282647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.699 [2024-07-26 16:41:36.282684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.699 qpair failed and we were unable to recover it. 
00:36:16.699 [2024-07-26 16:41:36.282907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.699 [2024-07-26 16:41:36.282944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.699 qpair failed and we were unable to recover it. 00:36:16.699 [2024-07-26 16:41:36.283146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.699 [2024-07-26 16:41:36.283180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.699 qpair failed and we were unable to recover it. 00:36:16.699 [2024-07-26 16:41:36.283384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.699 [2024-07-26 16:41:36.283422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.699 qpair failed and we were unable to recover it. 00:36:16.699 [2024-07-26 16:41:36.283601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.699 [2024-07-26 16:41:36.283639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.699 qpair failed and we were unable to recover it. 00:36:16.699 [2024-07-26 16:41:36.283865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.699 [2024-07-26 16:41:36.283899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.699 qpair failed and we were unable to recover it. 00:36:16.699 [2024-07-26 16:41:36.284127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.699 [2024-07-26 16:41:36.284166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.699 qpair failed and we were unable to recover it. 00:36:16.699 [2024-07-26 16:41:36.284329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.699 [2024-07-26 16:41:36.284367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.699 qpair failed and we were unable to recover it. 00:36:16.699 [2024-07-26 16:41:36.284575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.699 [2024-07-26 16:41:36.284609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.699 qpair failed and we were unable to recover it. 00:36:16.699 [2024-07-26 16:41:36.284863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.699 [2024-07-26 16:41:36.284900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.699 qpair failed and we were unable to recover it. 00:36:16.699 [2024-07-26 16:41:36.285122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.699 [2024-07-26 16:41:36.285160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.699 qpair failed and we were unable to recover it. 
00:36:16.699 [2024-07-26 16:41:36.285360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.699 [2024-07-26 16:41:36.285394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.699 qpair failed and we were unable to recover it. 00:36:16.699 [2024-07-26 16:41:36.285611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.700 [2024-07-26 16:41:36.285649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.700 qpair failed and we were unable to recover it. 00:36:16.700 [2024-07-26 16:41:36.285883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.700 [2024-07-26 16:41:36.285921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.700 qpair failed and we were unable to recover it. 00:36:16.700 [2024-07-26 16:41:36.286125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.700 [2024-07-26 16:41:36.286160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.700 qpair failed and we were unable to recover it. 00:36:16.700 [2024-07-26 16:41:36.286386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.700 [2024-07-26 16:41:36.286423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.700 qpair failed and we were unable to recover it. 00:36:16.700 [2024-07-26 16:41:36.286670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.700 [2024-07-26 16:41:36.286708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.700 qpair failed and we were unable to recover it. 00:36:16.700 [2024-07-26 16:41:36.286933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.700 [2024-07-26 16:41:36.286967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.700 qpair failed and we were unable to recover it. 00:36:16.700 [2024-07-26 16:41:36.287182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.700 [2024-07-26 16:41:36.287218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.700 qpair failed and we were unable to recover it. 00:36:16.700 [2024-07-26 16:41:36.287398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.700 [2024-07-26 16:41:36.287431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.700 qpair failed and we were unable to recover it. 00:36:16.700 [2024-07-26 16:41:36.287618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.700 [2024-07-26 16:41:36.287652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.700 qpair failed and we were unable to recover it. 
00:36:16.700 [2024-07-26 16:41:36.287878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.700 [2024-07-26 16:41:36.287938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.700 qpair failed and we were unable to recover it. 00:36:16.700 [2024-07-26 16:41:36.288136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.700 [2024-07-26 16:41:36.288179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.700 qpair failed and we were unable to recover it. 00:36:16.700 [2024-07-26 16:41:36.288383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.700 [2024-07-26 16:41:36.288417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.700 qpair failed and we were unable to recover it. 00:36:16.700 [2024-07-26 16:41:36.288621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.700 [2024-07-26 16:41:36.288659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.700 qpair failed and we were unable to recover it. 00:36:16.700 [2024-07-26 16:41:36.288826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.700 [2024-07-26 16:41:36.288864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.700 qpair failed and we were unable to recover it. 00:36:16.700 [2024-07-26 16:41:36.289067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.700 [2024-07-26 16:41:36.289102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.700 qpair failed and we were unable to recover it. 00:36:16.700 [2024-07-26 16:41:36.289296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.700 [2024-07-26 16:41:36.289334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.700 qpair failed and we were unable to recover it. 00:36:16.700 [2024-07-26 16:41:36.289535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.700 [2024-07-26 16:41:36.289573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.700 qpair failed and we were unable to recover it. 00:36:16.700 [2024-07-26 16:41:36.289779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.700 [2024-07-26 16:41:36.289812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.700 qpair failed and we were unable to recover it. 00:36:16.700 [2024-07-26 16:41:36.290016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.700 [2024-07-26 16:41:36.290053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.700 qpair failed and we were unable to recover it. 
00:36:16.700 [2024-07-26 16:41:36.290276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.700 [2024-07-26 16:41:36.290313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.700 qpair failed and we were unable to recover it. 00:36:16.700 [2024-07-26 16:41:36.290520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.700 [2024-07-26 16:41:36.290558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.700 qpair failed and we were unable to recover it. 00:36:16.700 [2024-07-26 16:41:36.290771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.700 [2024-07-26 16:41:36.290805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.700 qpair failed and we were unable to recover it. 00:36:16.700 [2024-07-26 16:41:36.291003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.700 [2024-07-26 16:41:36.291040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.700 qpair failed and we were unable to recover it. 00:36:16.700 [2024-07-26 16:41:36.291280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.700 [2024-07-26 16:41:36.291317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.700 qpair failed and we were unable to recover it. 00:36:16.700 [2024-07-26 16:41:36.291566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.700 [2024-07-26 16:41:36.291601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.700 qpair failed and we were unable to recover it. 00:36:16.700 [2024-07-26 16:41:36.291754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.700 [2024-07-26 16:41:36.291789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.700 qpair failed and we were unable to recover it. 00:36:16.700 [2024-07-26 16:41:36.292010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.700 [2024-07-26 16:41:36.292049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.700 qpair failed and we were unable to recover it. 00:36:16.700 [2024-07-26 16:41:36.292234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.700 [2024-07-26 16:41:36.292272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.700 qpair failed and we were unable to recover it. 00:36:16.700 [2024-07-26 16:41:36.292452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.700 [2024-07-26 16:41:36.292487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.700 qpair failed and we were unable to recover it. 
00:36:16.700 [2024-07-26 16:41:36.292675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.700 [2024-07-26 16:41:36.292710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.700 qpair failed and we were unable to recover it. 00:36:16.700 [2024-07-26 16:41:36.292859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.700 [2024-07-26 16:41:36.292903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.700 qpair failed and we were unable to recover it. 00:36:16.700 [2024-07-26 16:41:36.293078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.700 [2024-07-26 16:41:36.293141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.700 qpair failed and we were unable to recover it. 00:36:16.700 [2024-07-26 16:41:36.293379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.701 [2024-07-26 16:41:36.293417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.701 qpair failed and we were unable to recover it. 00:36:16.701 [2024-07-26 16:41:36.293632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.701 [2024-07-26 16:41:36.293666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.701 qpair failed and we were unable to recover it. 00:36:16.701 [2024-07-26 16:41:36.293870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.701 [2024-07-26 16:41:36.293908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.701 qpair failed and we were unable to recover it. 00:36:16.701 [2024-07-26 16:41:36.294134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.701 [2024-07-26 16:41:36.294174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.701 qpair failed and we were unable to recover it. 00:36:16.701 [2024-07-26 16:41:36.294378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.701 [2024-07-26 16:41:36.294415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.701 qpair failed and we were unable to recover it. 00:36:16.701 [2024-07-26 16:41:36.294620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.701 [2024-07-26 16:41:36.294653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.701 qpair failed and we were unable to recover it. 00:36:16.701 [2024-07-26 16:41:36.294851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.701 [2024-07-26 16:41:36.294889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.701 qpair failed and we were unable to recover it. 
00:36:16.701 [2024-07-26 16:41:36.295094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.701 [2024-07-26 16:41:36.295139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:36:16.701 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously with only the timestamps advancing, from 2024-07-26 16:41:36.295341 through 2024-07-26 16:41:36.347030 ...]
00:36:16.706 [2024-07-26 16:41:36.347250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.706 [2024-07-26 16:41:36.347285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:36:16.706 qpair failed and we were unable to recover it.
00:36:16.706 [2024-07-26 16:41:36.347503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.707 [2024-07-26 16:41:36.347553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.707 qpair failed and we were unable to recover it. 00:36:16.707 [2024-07-26 16:41:36.347737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.707 [2024-07-26 16:41:36.347788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.707 qpair failed and we were unable to recover it. 00:36:16.707 [2024-07-26 16:41:36.347993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.707 [2024-07-26 16:41:36.348030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.707 qpair failed and we were unable to recover it. 00:36:16.707 [2024-07-26 16:41:36.348208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.707 [2024-07-26 16:41:36.348247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.707 qpair failed and we were unable to recover it. 00:36:16.707 [2024-07-26 16:41:36.348436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.707 [2024-07-26 16:41:36.348469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.707 qpair failed and we were unable to recover it. 00:36:16.707 [2024-07-26 16:41:36.348710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.707 [2024-07-26 16:41:36.348749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.707 qpair failed and we were unable to recover it. 00:36:16.707 [2024-07-26 16:41:36.348977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.707 [2024-07-26 16:41:36.349016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.707 qpair failed and we were unable to recover it. 00:36:16.707 [2024-07-26 16:41:36.349213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.707 [2024-07-26 16:41:36.349252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.707 qpair failed and we were unable to recover it. 00:36:16.707 [2024-07-26 16:41:36.349439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.707 [2024-07-26 16:41:36.349472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.707 qpair failed and we were unable to recover it. 00:36:16.707 [2024-07-26 16:41:36.349663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.707 [2024-07-26 16:41:36.349697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.707 qpair failed and we were unable to recover it. 
00:36:16.707 [2024-07-26 16:41:36.349957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.707 [2024-07-26 16:41:36.349995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.707 qpair failed and we were unable to recover it. 00:36:16.707 [2024-07-26 16:41:36.350209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.707 [2024-07-26 16:41:36.350244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.707 qpair failed and we were unable to recover it. 00:36:16.707 [2024-07-26 16:41:36.350428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.707 [2024-07-26 16:41:36.350462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.707 qpair failed and we were unable to recover it. 00:36:16.707 [2024-07-26 16:41:36.350693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.707 [2024-07-26 16:41:36.350731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.707 qpair failed and we were unable to recover it. 00:36:16.707 [2024-07-26 16:41:36.350888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.707 [2024-07-26 16:41:36.350926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.707 qpair failed and we were unable to recover it. 00:36:16.707 [2024-07-26 16:41:36.351134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.707 [2024-07-26 16:41:36.351170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.707 qpair failed and we were unable to recover it. 00:36:16.707 [2024-07-26 16:41:36.351352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.707 [2024-07-26 16:41:36.351396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.707 qpair failed and we were unable to recover it. 00:36:16.707 [2024-07-26 16:41:36.351593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.707 [2024-07-26 16:41:36.351632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.707 qpair failed and we were unable to recover it. 00:36:16.707 [2024-07-26 16:41:36.351831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.707 [2024-07-26 16:41:36.351869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.707 qpair failed and we were unable to recover it. 00:36:16.707 [2024-07-26 16:41:36.352095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.707 [2024-07-26 16:41:36.352133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.707 qpair failed and we were unable to recover it. 
00:36:16.707 [2024-07-26 16:41:36.352354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.707 [2024-07-26 16:41:36.352388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.707 qpair failed and we were unable to recover it. 00:36:16.707 [2024-07-26 16:41:36.352590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.707 [2024-07-26 16:41:36.352629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.707 qpair failed and we were unable to recover it. 00:36:16.707 [2024-07-26 16:41:36.352943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.707 [2024-07-26 16:41:36.353002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.707 qpair failed and we were unable to recover it. 00:36:16.707 [2024-07-26 16:41:36.353283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.707 [2024-07-26 16:41:36.353318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.707 qpair failed and we were unable to recover it. 00:36:16.707 [2024-07-26 16:41:36.353577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.707 [2024-07-26 16:41:36.353611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.707 qpair failed and we were unable to recover it. 00:36:16.707 [2024-07-26 16:41:36.353790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.707 [2024-07-26 16:41:36.353825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.707 qpair failed and we were unable to recover it. 00:36:16.707 [2024-07-26 16:41:36.354088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.707 [2024-07-26 16:41:36.354126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.707 qpair failed and we were unable to recover it. 00:36:16.707 [2024-07-26 16:41:36.354322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.707 [2024-07-26 16:41:36.354356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.707 qpair failed and we were unable to recover it. 00:36:16.707 [2024-07-26 16:41:36.354548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.707 [2024-07-26 16:41:36.354586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.707 qpair failed and we were unable to recover it. 00:36:16.707 [2024-07-26 16:41:36.354837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.707 [2024-07-26 16:41:36.354874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.707 qpair failed and we were unable to recover it. 
00:36:16.707 [2024-07-26 16:41:36.355094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.707 [2024-07-26 16:41:36.355129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.707 qpair failed and we were unable to recover it. 00:36:16.707 [2024-07-26 16:41:36.355376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.707 [2024-07-26 16:41:36.355414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.707 qpair failed and we were unable to recover it. 00:36:16.707 [2024-07-26 16:41:36.355616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.707 [2024-07-26 16:41:36.355666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.707 qpair failed and we were unable to recover it. 00:36:16.707 [2024-07-26 16:41:36.355850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.707 [2024-07-26 16:41:36.355900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.707 qpair failed and we were unable to recover it. 00:36:16.707 [2024-07-26 16:41:36.356124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.707 [2024-07-26 16:41:36.356163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.707 qpair failed and we were unable to recover it. 00:36:16.707 [2024-07-26 16:41:36.356333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.707 [2024-07-26 16:41:36.356372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.707 qpair failed and we were unable to recover it. 00:36:16.707 [2024-07-26 16:41:36.356591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.707 [2024-07-26 16:41:36.356625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.707 qpair failed and we were unable to recover it. 00:36:16.707 [2024-07-26 16:41:36.356856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.708 [2024-07-26 16:41:36.356894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.708 qpair failed and we were unable to recover it. 00:36:16.708 [2024-07-26 16:41:36.357071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.708 [2024-07-26 16:41:36.357110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.708 qpair failed and we were unable to recover it. 00:36:16.708 [2024-07-26 16:41:36.357310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.708 [2024-07-26 16:41:36.357347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.708 qpair failed and we were unable to recover it. 
00:36:16.708 [2024-07-26 16:41:36.357599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.708 [2024-07-26 16:41:36.357634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.708 qpair failed and we were unable to recover it. 00:36:16.708 [2024-07-26 16:41:36.357808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.708 [2024-07-26 16:41:36.357845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.708 qpair failed and we were unable to recover it. 00:36:16.708 [2024-07-26 16:41:36.358074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.708 [2024-07-26 16:41:36.358112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.708 qpair failed and we were unable to recover it. 00:36:16.708 [2024-07-26 16:41:36.358341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.708 [2024-07-26 16:41:36.358380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.708 qpair failed and we were unable to recover it. 00:36:16.708 [2024-07-26 16:41:36.358549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.708 [2024-07-26 16:41:36.358584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.708 qpair failed and we were unable to recover it. 00:36:16.708 [2024-07-26 16:41:36.358780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.708 [2024-07-26 16:41:36.358819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.708 qpair failed and we were unable to recover it. 00:36:16.708 [2024-07-26 16:41:36.359073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.708 [2024-07-26 16:41:36.359108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.708 qpair failed and we were unable to recover it. 00:36:16.708 [2024-07-26 16:41:36.359295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.708 [2024-07-26 16:41:36.359346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.708 qpair failed and we were unable to recover it. 00:36:16.708 [2024-07-26 16:41:36.359556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.708 [2024-07-26 16:41:36.359590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.708 qpair failed and we were unable to recover it. 00:36:16.708 [2024-07-26 16:41:36.359796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.708 [2024-07-26 16:41:36.359834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.708 qpair failed and we were unable to recover it. 
00:36:16.708 [2024-07-26 16:41:36.360008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.708 [2024-07-26 16:41:36.360046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.708 qpair failed and we were unable to recover it. 00:36:16.708 [2024-07-26 16:41:36.360261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.708 [2024-07-26 16:41:36.360300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.708 qpair failed and we were unable to recover it. 00:36:16.708 [2024-07-26 16:41:36.360536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.708 [2024-07-26 16:41:36.360570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.708 qpair failed and we were unable to recover it. 00:36:16.708 [2024-07-26 16:41:36.360762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.708 [2024-07-26 16:41:36.360796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.708 qpair failed and we were unable to recover it. 00:36:16.708 [2024-07-26 16:41:36.361017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.708 [2024-07-26 16:41:36.361056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.708 qpair failed and we were unable to recover it. 00:36:16.708 [2024-07-26 16:41:36.361274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.708 [2024-07-26 16:41:36.361311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.708 qpair failed and we were unable to recover it. 00:36:16.708 [2024-07-26 16:41:36.361480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.708 [2024-07-26 16:41:36.361516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.708 qpair failed and we were unable to recover it. 00:36:16.708 [2024-07-26 16:41:36.361721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.708 [2024-07-26 16:41:36.361759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.708 qpair failed and we were unable to recover it. 00:36:16.708 [2024-07-26 16:41:36.361929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.708 [2024-07-26 16:41:36.361967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.708 qpair failed and we were unable to recover it. 00:36:16.708 [2024-07-26 16:41:36.362195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.708 [2024-07-26 16:41:36.362230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.708 qpair failed and we were unable to recover it. 
00:36:16.708 [2024-07-26 16:41:36.362431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.708 [2024-07-26 16:41:36.362464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.708 qpair failed and we were unable to recover it. 00:36:16.708 [2024-07-26 16:41:36.362703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.708 [2024-07-26 16:41:36.362741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.708 qpair failed and we were unable to recover it. 00:36:16.708 [2024-07-26 16:41:36.362914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.708 [2024-07-26 16:41:36.362951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.708 qpair failed and we were unable to recover it. 00:36:16.708 [2024-07-26 16:41:36.363171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.708 [2024-07-26 16:41:36.363209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.708 qpair failed and we were unable to recover it. 00:36:16.708 [2024-07-26 16:41:36.363380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.708 [2024-07-26 16:41:36.363421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.708 qpair failed and we were unable to recover it. 00:36:16.708 [2024-07-26 16:41:36.363617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.708 [2024-07-26 16:41:36.363655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.708 qpair failed and we were unable to recover it. 00:36:16.708 [2024-07-26 16:41:36.363829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.708 [2024-07-26 16:41:36.363868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.708 qpair failed and we were unable to recover it. 00:36:16.708 [2024-07-26 16:41:36.364089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.708 [2024-07-26 16:41:36.364127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.708 qpair failed and we were unable to recover it. 00:36:16.708 [2024-07-26 16:41:36.364317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.708 [2024-07-26 16:41:36.364357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.708 qpair failed and we were unable to recover it. 00:36:16.708 [2024-07-26 16:41:36.364533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.708 [2024-07-26 16:41:36.364571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.708 qpair failed and we were unable to recover it. 
00:36:16.708 [2024-07-26 16:41:36.364931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.708 [2024-07-26 16:41:36.364992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.708 qpair failed and we were unable to recover it. 00:36:16.708 [2024-07-26 16:41:36.365180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.708 [2024-07-26 16:41:36.365219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.708 qpair failed and we were unable to recover it. 00:36:16.708 [2024-07-26 16:41:36.365433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.708 [2024-07-26 16:41:36.365467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.708 qpair failed and we were unable to recover it. 00:36:16.708 [2024-07-26 16:41:36.365647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.708 [2024-07-26 16:41:36.365681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.708 qpair failed and we were unable to recover it. 00:36:16.708 [2024-07-26 16:41:36.365900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.709 [2024-07-26 16:41:36.365938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.709 qpair failed and we were unable to recover it. 00:36:16.709 [2024-07-26 16:41:36.366132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.709 [2024-07-26 16:41:36.366171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.709 qpair failed and we were unable to recover it. 00:36:16.709 [2024-07-26 16:41:36.366391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.709 [2024-07-26 16:41:36.366425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.709 qpair failed and we were unable to recover it. 00:36:16.709 [2024-07-26 16:41:36.366645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.709 [2024-07-26 16:41:36.366679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.709 qpair failed and we were unable to recover it. 00:36:16.709 [2024-07-26 16:41:36.366866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.709 [2024-07-26 16:41:36.366901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.709 qpair failed and we were unable to recover it. 00:36:16.709 [2024-07-26 16:41:36.367151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.709 [2024-07-26 16:41:36.367190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.709 qpair failed and we were unable to recover it. 
00:36:16.709 [2024-07-26 16:41:36.367388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.709 [2024-07-26 16:41:36.367432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.709 qpair failed and we were unable to recover it. 00:36:16.709 [2024-07-26 16:41:36.367629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.709 [2024-07-26 16:41:36.367667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.709 qpair failed and we were unable to recover it. 00:36:16.709 [2024-07-26 16:41:36.367841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.709 [2024-07-26 16:41:36.367878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.709 qpair failed and we were unable to recover it. 00:36:16.709 [2024-07-26 16:41:36.368087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.709 [2024-07-26 16:41:36.368122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.709 qpair failed and we were unable to recover it. 00:36:16.709 [2024-07-26 16:41:36.368296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.709 [2024-07-26 16:41:36.368331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.709 qpair failed and we were unable to recover it. 00:36:16.709 [2024-07-26 16:41:36.368528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.709 [2024-07-26 16:41:36.368566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.709 qpair failed and we were unable to recover it. 00:36:16.709 [2024-07-26 16:41:36.368890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.709 [2024-07-26 16:41:36.368957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.709 qpair failed and we were unable to recover it. 00:36:16.709 [2024-07-26 16:41:36.369164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.709 [2024-07-26 16:41:36.369204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.709 qpair failed and we were unable to recover it. 00:36:16.709 [2024-07-26 16:41:36.369382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.709 [2024-07-26 16:41:36.369417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.709 qpair failed and we were unable to recover it. 00:36:16.709 [2024-07-26 16:41:36.369607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.709 [2024-07-26 16:41:36.369644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.709 qpair failed and we were unable to recover it. 
00:36:16.709 [2024-07-26 16:41:36.369837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.709 [2024-07-26 16:41:36.369875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.709 qpair failed and we were unable to recover it. 00:36:16.709 [2024-07-26 16:41:36.370072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.709 [2024-07-26 16:41:36.370110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.709 qpair failed and we were unable to recover it. 00:36:16.709 [2024-07-26 16:41:36.370282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.709 [2024-07-26 16:41:36.370316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.709 qpair failed and we were unable to recover it. 00:36:16.709 [2024-07-26 16:41:36.370516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.709 [2024-07-26 16:41:36.370568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.709 qpair failed and we were unable to recover it. 00:36:16.709 [2024-07-26 16:41:36.370907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.709 [2024-07-26 16:41:36.370947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.709 qpair failed and we were unable to recover it. 00:36:16.709 [2024-07-26 16:41:36.371148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.709 [2024-07-26 16:41:36.371187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.709 qpair failed and we were unable to recover it. 00:36:16.709 [2024-07-26 16:41:36.371378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.709 [2024-07-26 16:41:36.371413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.709 qpair failed and we were unable to recover it. 00:36:16.709 [2024-07-26 16:41:36.371627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.709 [2024-07-26 16:41:36.371665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.709 qpair failed and we were unable to recover it. 00:36:16.709 [2024-07-26 16:41:36.371892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.709 [2024-07-26 16:41:36.371930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.709 qpair failed and we were unable to recover it. 00:36:16.709 [2024-07-26 16:41:36.372156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.709 [2024-07-26 16:41:36.372195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.709 qpair failed and we were unable to recover it. 
00:36:16.709 [2024-07-26 16:41:36.372427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.709 [2024-07-26 16:41:36.372462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.709 qpair failed and we were unable to recover it. 00:36:16.709 [2024-07-26 16:41:36.372638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.709 [2024-07-26 16:41:36.372676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.709 qpair failed and we were unable to recover it. 00:36:16.709 [2024-07-26 16:41:36.372910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.709 [2024-07-26 16:41:36.372952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.709 qpair failed and we were unable to recover it. 00:36:16.709 [2024-07-26 16:41:36.373160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.709 [2024-07-26 16:41:36.373212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.709 qpair failed and we were unable to recover it. 00:36:16.709 [2024-07-26 16:41:36.373387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.709 [2024-07-26 16:41:36.373447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.709 qpair failed and we were unable to recover it. 00:36:16.709 [2024-07-26 16:41:36.373661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.709 [2024-07-26 16:41:36.373699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.709 qpair failed and we were unable to recover it. 00:36:16.709 [2024-07-26 16:41:36.373873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.709 [2024-07-26 16:41:36.373911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.709 qpair failed and we were unable to recover it. 00:36:16.709 [2024-07-26 16:41:36.374142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.709 [2024-07-26 16:41:36.374180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.709 qpair failed and we were unable to recover it. 00:36:16.709 [2024-07-26 16:41:36.374394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.709 [2024-07-26 16:41:36.374433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.709 qpair failed and we were unable to recover it. 00:36:16.709 [2024-07-26 16:41:36.374636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.709 [2024-07-26 16:41:36.374675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.709 qpair failed and we were unable to recover it. 
00:36:16.709 [2024-07-26 16:41:36.374879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.709 [2024-07-26 16:41:36.374917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.709 qpair failed and we were unable to recover it. 00:36:16.709 [2024-07-26 16:41:36.375154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.710 [2024-07-26 16:41:36.375189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.710 qpair failed and we were unable to recover it. 00:36:16.710 [2024-07-26 16:41:36.375370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.710 [2024-07-26 16:41:36.375404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.710 qpair failed and we were unable to recover it. 00:36:16.710 [2024-07-26 16:41:36.375604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.710 [2024-07-26 16:41:36.375655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.710 qpair failed and we were unable to recover it. 00:36:16.710 [2024-07-26 16:41:36.375845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.710 [2024-07-26 16:41:36.375882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.710 qpair failed and we were unable to recover it. 00:36:16.710 [2024-07-26 16:41:36.376114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.710 [2024-07-26 16:41:36.376149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.710 qpair failed and we were unable to recover it. 00:36:16.710 [2024-07-26 16:41:36.376581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.710 [2024-07-26 16:41:36.376638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.710 qpair failed and we were unable to recover it. 00:36:16.710 [2024-07-26 16:41:36.376839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.710 [2024-07-26 16:41:36.376876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.710 qpair failed and we were unable to recover it. 00:36:16.710 [2024-07-26 16:41:36.377107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.710 [2024-07-26 16:41:36.377145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.710 qpair failed and we were unable to recover it. 00:36:16.710 [2024-07-26 16:41:36.377338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.710 [2024-07-26 16:41:36.377376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.710 qpair failed and we were unable to recover it. 
00:36:16.710 [2024-07-26 16:41:36.377571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.710 [2024-07-26 16:41:36.377606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.710 qpair failed and we were unable to recover it. 00:36:16.710 [2024-07-26 16:41:36.377834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.710 [2024-07-26 16:41:36.377873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.710 qpair failed and we were unable to recover it. 00:36:16.710 [2024-07-26 16:41:36.378050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.710 [2024-07-26 16:41:36.378107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.710 qpair failed and we were unable to recover it. 00:36:16.710 [2024-07-26 16:41:36.378326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.710 [2024-07-26 16:41:36.378359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.710 qpair failed and we were unable to recover it. 00:36:16.710 [2024-07-26 16:41:36.378502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.710 [2024-07-26 16:41:36.378535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.710 qpair failed and we were unable to recover it. 00:36:16.710 [2024-07-26 16:41:36.378764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.710 [2024-07-26 16:41:36.378802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.710 qpair failed and we were unable to recover it. 00:36:16.710 [2024-07-26 16:41:36.379025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.710 [2024-07-26 16:41:36.379074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.710 qpair failed and we were unable to recover it. 00:36:16.710 [2024-07-26 16:41:36.379267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.710 [2024-07-26 16:41:36.379304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.710 qpair failed and we were unable to recover it. 00:36:16.710 [2024-07-26 16:41:36.379515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.710 [2024-07-26 16:41:36.379550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.710 qpair failed and we were unable to recover it. 00:36:16.710 [2024-07-26 16:41:36.379774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.710 [2024-07-26 16:41:36.379812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.710 qpair failed and we were unable to recover it. 
00:36:16.710 [2024-07-26 16:41:36.380004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.710 [2024-07-26 16:41:36.380042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.710 qpair failed and we were unable to recover it. 00:36:16.710 [2024-07-26 16:41:36.380242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.710 [2024-07-26 16:41:36.380280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.710 qpair failed and we were unable to recover it. 00:36:16.710 [2024-07-26 16:41:36.380507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.710 [2024-07-26 16:41:36.380541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.710 qpair failed and we were unable to recover it. 00:36:16.710 [2024-07-26 16:41:36.380739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.710 [2024-07-26 16:41:36.380777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.710 qpair failed and we were unable to recover it. 00:36:16.710 [2024-07-26 16:41:36.380971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.710 [2024-07-26 16:41:36.381009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.710 qpair failed and we were unable to recover it. 00:36:16.710 [2024-07-26 16:41:36.381227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.710 [2024-07-26 16:41:36.381261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.710 qpair failed and we were unable to recover it. 00:36:16.710 [2024-07-26 16:41:36.381437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.710 [2024-07-26 16:41:36.381471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.710 qpair failed and we were unable to recover it. 00:36:16.710 [2024-07-26 16:41:36.381697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.710 [2024-07-26 16:41:36.381735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.710 qpair failed and we were unable to recover it. 00:36:16.710 [2024-07-26 16:41:36.381910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.710 [2024-07-26 16:41:36.381948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.710 qpair failed and we were unable to recover it. 00:36:16.710 [2024-07-26 16:41:36.382131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.710 [2024-07-26 16:41:36.382170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.710 qpair failed and we were unable to recover it. 
00:36:16.710 [2024-07-26 16:41:36.382347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.710 [2024-07-26 16:41:36.382382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.710 qpair failed and we were unable to recover it. 00:36:16.710 [2024-07-26 16:41:36.382580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.710 [2024-07-26 16:41:36.382619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.710 qpair failed and we were unable to recover it. 00:36:16.710 [2024-07-26 16:41:36.382885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.710 [2024-07-26 16:41:36.382924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.710 qpair failed and we were unable to recover it. 00:36:16.710 [2024-07-26 16:41:36.383144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.710 [2024-07-26 16:41:36.383179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.710 qpair failed and we were unable to recover it. 00:36:16.710 [2024-07-26 16:41:36.383333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.711 [2024-07-26 16:41:36.383377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.711 qpair failed and we were unable to recover it. 00:36:16.711 [2024-07-26 16:41:36.383579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.711 [2024-07-26 16:41:36.383631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.711 qpair failed and we were unable to recover it. 00:36:16.711 [2024-07-26 16:41:36.383840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.711 [2024-07-26 16:41:36.383874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.711 qpair failed and we were unable to recover it. 00:36:16.711 [2024-07-26 16:41:36.384103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.711 [2024-07-26 16:41:36.384141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.711 qpair failed and we were unable to recover it. 00:36:16.711 [2024-07-26 16:41:36.384299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.711 [2024-07-26 16:41:36.384338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.711 qpair failed and we were unable to recover it. 00:36:16.711 [2024-07-26 16:41:36.384562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.711 [2024-07-26 16:41:36.384600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.711 qpair failed and we were unable to recover it. 
00:36:16.711 [2024-07-26 16:41:36.384846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.711 [2024-07-26 16:41:36.384884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.711 qpair failed and we were unable to recover it. 00:36:16.711 [2024-07-26 16:41:36.385118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.711 [2024-07-26 16:41:36.385153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.711 qpair failed and we were unable to recover it. 00:36:16.711 [2024-07-26 16:41:36.385327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.711 [2024-07-26 16:41:36.385362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.711 qpair failed and we were unable to recover it. 00:36:16.711 [2024-07-26 16:41:36.385535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.711 [2024-07-26 16:41:36.385570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.711 qpair failed and we were unable to recover it. 00:36:16.711 [2024-07-26 16:41:36.385763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.711 [2024-07-26 16:41:36.385813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.711 qpair failed and we were unable to recover it. 00:36:16.711 [2024-07-26 16:41:36.385977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.711 [2024-07-26 16:41:36.386015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.711 qpair failed and we were unable to recover it. 00:36:16.711 [2024-07-26 16:41:36.386218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.711 [2024-07-26 16:41:36.386253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.711 qpair failed and we were unable to recover it. 00:36:16.711 [2024-07-26 16:41:36.386474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.711 [2024-07-26 16:41:36.386512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.711 qpair failed and we were unable to recover it. 00:36:16.711 [2024-07-26 16:41:36.386822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.711 [2024-07-26 16:41:36.386878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.711 qpair failed and we were unable to recover it. 00:36:16.711 [2024-07-26 16:41:36.387097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.711 [2024-07-26 16:41:36.387136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.711 qpair failed and we were unable to recover it. 
00:36:16.711 [2024-07-26 16:41:36.387336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.711 [2024-07-26 16:41:36.387370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.711 qpair failed and we were unable to recover it. 00:36:16.711 [2024-07-26 16:41:36.387524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.711 [2024-07-26 16:41:36.387558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.711 qpair failed and we were unable to recover it. 00:36:16.711 [2024-07-26 16:41:36.387862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.711 [2024-07-26 16:41:36.387927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.711 qpair failed and we were unable to recover it. 00:36:16.711 [2024-07-26 16:41:36.388140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.711 [2024-07-26 16:41:36.388179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.711 qpair failed and we were unable to recover it. 00:36:16.711 [2024-07-26 16:41:36.388393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.711 [2024-07-26 16:41:36.388427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.711 qpair failed and we were unable to recover it. 00:36:16.711 [2024-07-26 16:41:36.388625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.711 [2024-07-26 16:41:36.388663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.711 qpair failed and we were unable to recover it. 00:36:16.711 [2024-07-26 16:41:36.389012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.711 [2024-07-26 16:41:36.389079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.711 qpair failed and we were unable to recover it. 00:36:16.711 [2024-07-26 16:41:36.389307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.711 [2024-07-26 16:41:36.389345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.711 qpair failed and we were unable to recover it. 00:36:16.711 [2024-07-26 16:41:36.389565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.711 [2024-07-26 16:41:36.389599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.711 qpair failed and we were unable to recover it. 00:36:16.711 [2024-07-26 16:41:36.389756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.711 [2024-07-26 16:41:36.389791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.711 qpair failed and we were unable to recover it. 
00:36:16.711 [2024-07-26 16:41:36.390009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.711 [2024-07-26 16:41:36.390047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.711 qpair failed and we were unable to recover it. 00:36:16.711 [2024-07-26 16:41:36.390216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.711 [2024-07-26 16:41:36.390255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.711 qpair failed and we were unable to recover it. 00:36:16.711 [2024-07-26 16:41:36.390451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.711 [2024-07-26 16:41:36.390485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.711 qpair failed and we were unable to recover it. 00:36:16.711 [2024-07-26 16:41:36.390679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.711 [2024-07-26 16:41:36.390717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.711 qpair failed and we were unable to recover it. 00:36:16.711 [2024-07-26 16:41:36.390913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.711 [2024-07-26 16:41:36.390951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.711 qpair failed and we were unable to recover it. 00:36:16.711 [2024-07-26 16:41:36.391157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.711 [2024-07-26 16:41:36.391196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.711 qpair failed and we were unable to recover it. 00:36:16.711 [2024-07-26 16:41:36.391389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.711 [2024-07-26 16:41:36.391423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.711 qpair failed and we were unable to recover it. 00:36:16.711 [2024-07-26 16:41:36.391647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.711 [2024-07-26 16:41:36.391685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.711 qpair failed and we were unable to recover it. 00:36:16.711 [2024-07-26 16:41:36.391878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.711 [2024-07-26 16:41:36.391916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.711 qpair failed and we were unable to recover it. 00:36:16.711 [2024-07-26 16:41:36.392116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.711 [2024-07-26 16:41:36.392150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.711 qpair failed and we were unable to recover it. 
00:36:16.711 [2024-07-26 16:41:36.392329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.711 [2024-07-26 16:41:36.392363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.711 qpair failed and we were unable to recover it. 00:36:16.711 [2024-07-26 16:41:36.392587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.711 [2024-07-26 16:41:36.392645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.712 qpair failed and we were unable to recover it. 00:36:16.712 [2024-07-26 16:41:36.392869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.712 [2024-07-26 16:41:36.392907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.712 qpair failed and we were unable to recover it. 00:36:16.712 [2024-07-26 16:41:36.393099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.712 [2024-07-26 16:41:36.393137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.712 qpair failed and we were unable to recover it. 00:36:16.712 [2024-07-26 16:41:36.393306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.712 [2024-07-26 16:41:36.393341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.712 qpair failed and we were unable to recover it. 00:36:16.712 [2024-07-26 16:41:36.393575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.712 [2024-07-26 16:41:36.393613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.712 qpair failed and we were unable to recover it. 00:36:16.712 [2024-07-26 16:41:36.393783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.712 [2024-07-26 16:41:36.393820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.712 qpair failed and we were unable to recover it. 00:36:16.712 [2024-07-26 16:41:36.394049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.712 [2024-07-26 16:41:36.394091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.712 qpair failed and we were unable to recover it. 00:36:16.712 [2024-07-26 16:41:36.394249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.712 [2024-07-26 16:41:36.394288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.712 qpair failed and we were unable to recover it. 00:36:16.712 [2024-07-26 16:41:36.394519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.712 [2024-07-26 16:41:36.394576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.712 qpair failed and we were unable to recover it. 
00:36:16.712 [2024-07-26 16:41:36.394947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.712 [2024-07-26 16:41:36.395005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.712 qpair failed and we were unable to recover it. 00:36:16.712 [2024-07-26 16:41:36.395193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.712 [2024-07-26 16:41:36.395234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.712 qpair failed and we were unable to recover it. 00:36:16.712 [2024-07-26 16:41:36.395391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.712 [2024-07-26 16:41:36.395425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.712 qpair failed and we were unable to recover it. 00:36:16.712 [2024-07-26 16:41:36.395600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.712 [2024-07-26 16:41:36.395634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.712 qpair failed and we were unable to recover it. 00:36:16.712 [2024-07-26 16:41:36.395845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.712 [2024-07-26 16:41:36.395898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.712 qpair failed and we were unable to recover it. 00:36:16.712 [2024-07-26 16:41:36.396136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.712 [2024-07-26 16:41:36.396172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.712 qpair failed and we were unable to recover it. 00:36:16.712 [2024-07-26 16:41:36.396379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.712 [2024-07-26 16:41:36.396424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.712 qpair failed and we were unable to recover it. 00:36:16.712 [2024-07-26 16:41:36.396619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.712 [2024-07-26 16:41:36.396658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.712 qpair failed and we were unable to recover it. 00:36:16.712 [2024-07-26 16:41:36.396830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.712 [2024-07-26 16:41:36.396865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.712 qpair failed and we were unable to recover it. 00:36:16.712 [2024-07-26 16:41:36.397071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.712 [2024-07-26 16:41:36.397110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.712 qpair failed and we were unable to recover it. 
00:36:16.712 [2024-07-26 16:41:36.397309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.712 [2024-07-26 16:41:36.397345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.712 qpair failed and we were unable to recover it. 00:36:16.712 [2024-07-26 16:41:36.397564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.712 [2024-07-26 16:41:36.397602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.712 qpair failed and we were unable to recover it. 00:36:16.712 [2024-07-26 16:41:36.397910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.712 [2024-07-26 16:41:36.397982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.712 qpair failed and we were unable to recover it. 00:36:16.712 [2024-07-26 16:41:36.398175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.712 [2024-07-26 16:41:36.398215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.712 qpair failed and we were unable to recover it. 00:36:16.712 [2024-07-26 16:41:36.398389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.712 [2024-07-26 16:41:36.398423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.712 qpair failed and we were unable to recover it. 00:36:16.712 [2024-07-26 16:41:36.398609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.712 [2024-07-26 16:41:36.398644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.712 qpair failed and we were unable to recover it. 00:36:16.712 [2024-07-26 16:41:36.398867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.712 [2024-07-26 16:41:36.398919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.712 qpair failed and we were unable to recover it. 00:36:16.712 [2024-07-26 16:41:36.399170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.712 [2024-07-26 16:41:36.399206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.712 qpair failed and we were unable to recover it. 00:36:16.712 [2024-07-26 16:41:36.399390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.712 [2024-07-26 16:41:36.399449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.712 qpair failed and we were unable to recover it. 00:36:16.712 [2024-07-26 16:41:36.399687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.712 [2024-07-26 16:41:36.399726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.712 qpair failed and we were unable to recover it. 
00:36:16.712 [2024-07-26 16:41:36.399931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.712 [2024-07-26 16:41:36.399966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.712 qpair failed and we were unable to recover it. 00:36:16.712 [2024-07-26 16:41:36.400152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.712 [2024-07-26 16:41:36.400192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.712 qpair failed and we were unable to recover it. 00:36:16.712 [2024-07-26 16:41:36.400414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.712 [2024-07-26 16:41:36.400450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.712 qpair failed and we were unable to recover it. 00:36:16.712 [2024-07-26 16:41:36.400644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.712 [2024-07-26 16:41:36.400682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.712 qpair failed and we were unable to recover it. 00:36:16.712 [2024-07-26 16:41:36.400880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.712 [2024-07-26 16:41:36.400933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.712 qpair failed and we were unable to recover it. 00:36:16.712 [2024-07-26 16:41:36.401125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.712 [2024-07-26 16:41:36.401165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.712 qpair failed and we were unable to recover it. 00:36:16.712 [2024-07-26 16:41:36.401402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.712 [2024-07-26 16:41:36.401437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.712 qpair failed and we were unable to recover it. 00:36:16.712 [2024-07-26 16:41:36.401620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.712 [2024-07-26 16:41:36.401659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.712 qpair failed and we were unable to recover it. 00:36:16.712 [2024-07-26 16:41:36.401875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.712 [2024-07-26 16:41:36.401910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.712 qpair failed and we were unable to recover it. 00:36:16.712 [2024-07-26 16:41:36.402111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.713 [2024-07-26 16:41:36.402150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.713 qpair failed and we were unable to recover it. 
00:36:16.713 [2024-07-26 16:41:36.402333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.713 [2024-07-26 16:41:36.402369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.713 qpair failed and we were unable to recover it. 00:36:16.713 [2024-07-26 16:41:36.402524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.713 [2024-07-26 16:41:36.402558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.713 qpair failed and we were unable to recover it. 00:36:16.713 [2024-07-26 16:41:36.402760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.713 [2024-07-26 16:41:36.402805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.713 qpair failed and we were unable to recover it. 00:36:16.713 [2024-07-26 16:41:36.402987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.713 [2024-07-26 16:41:36.403030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.713 qpair failed and we were unable to recover it. 00:36:16.713 [2024-07-26 16:41:36.403265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.713 [2024-07-26 16:41:36.403300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.713 qpair failed and we were unable to recover it. 00:36:16.713 [2024-07-26 16:41:36.403531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.713 [2024-07-26 16:41:36.403571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.713 qpair failed and we were unable to recover it. 00:36:16.713 [2024-07-26 16:41:36.403885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.713 [2024-07-26 16:41:36.403959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.713 qpair failed and we were unable to recover it. 00:36:16.713 [2024-07-26 16:41:36.404152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.713 [2024-07-26 16:41:36.404191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.713 qpair failed and we were unable to recover it. 00:36:16.713 [2024-07-26 16:41:36.404401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.713 [2024-07-26 16:41:36.404442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.713 qpair failed and we were unable to recover it. 00:36:16.713 [2024-07-26 16:41:36.404644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.713 [2024-07-26 16:41:36.404681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.713 qpair failed and we were unable to recover it. 
00:36:16.713 [2024-07-26 16:41:36.404861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.713 [2024-07-26 16:41:36.404895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.713 qpair failed and we were unable to recover it. 00:36:16.713 [2024-07-26 16:41:36.405054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.713 [2024-07-26 16:41:36.405097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.713 qpair failed and we were unable to recover it. 00:36:16.713 [2024-07-26 16:41:36.405298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.713 [2024-07-26 16:41:36.405333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.713 qpair failed and we were unable to recover it. 00:36:16.713 [2024-07-26 16:41:36.405516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.713 [2024-07-26 16:41:36.405550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.713 qpair failed and we were unable to recover it. 00:36:16.713 [2024-07-26 16:41:36.405751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.713 [2024-07-26 16:41:36.405790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.713 qpair failed and we were unable to recover it. 00:36:16.713 [2024-07-26 16:41:36.405994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.713 [2024-07-26 16:41:36.406033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.713 qpair failed and we were unable to recover it. 00:36:16.713 [2024-07-26 16:41:36.406267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.713 [2024-07-26 16:41:36.406302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.713 qpair failed and we were unable to recover it. 00:36:16.713 [2024-07-26 16:41:36.406507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.713 [2024-07-26 16:41:36.406549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:16.713 qpair failed and we were unable to recover it. 00:36:16.713 [2024-07-26 16:41:36.406720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:36:16.713 [2024-07-26 16:41:36.406997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.713 [2024-07-26 16:41:36.407052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.713 qpair failed and we were unable to recover it. 
00:36:16.713 [2024-07-26 16:41:36.407277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.713 [2024-07-26 16:41:36.407314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.713 qpair failed and we were unable to recover it. 00:36:16.713 [2024-07-26 16:41:36.407543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.713 [2024-07-26 16:41:36.407581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.713 qpair failed and we were unable to recover it. 00:36:16.713 [2024-07-26 16:41:36.407782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.713 [2024-07-26 16:41:36.407826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.713 qpair failed and we were unable to recover it. 00:36:16.713 [2024-07-26 16:41:36.408024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.713 [2024-07-26 16:41:36.408064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.713 qpair failed and we were unable to recover it. 00:36:16.713 [2024-07-26 16:41:36.408258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.713 [2024-07-26 16:41:36.408292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.713 qpair failed and we were unable to recover it. 00:36:16.713 [2024-07-26 16:41:36.408528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.713 [2024-07-26 16:41:36.408562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.713 qpair failed and we were unable to recover it. 00:36:16.713 [2024-07-26 16:41:36.408766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.713 [2024-07-26 16:41:36.408800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.713 qpair failed and we were unable to recover it. 00:36:16.713 [2024-07-26 16:41:36.409003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.713 [2024-07-26 16:41:36.409040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.713 qpair failed and we were unable to recover it. 00:36:16.713 [2024-07-26 16:41:36.409245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.713 [2024-07-26 16:41:36.409279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.713 qpair failed and we were unable to recover it. 00:36:16.713 [2024-07-26 16:41:36.409436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.713 [2024-07-26 16:41:36.409470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.713 qpair failed and we were unable to recover it. 
00:36:16.713 [2024-07-26 16:41:36.409652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.713 [2024-07-26 16:41:36.409686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.713 qpair failed and we were unable to recover it. 00:36:16.713 [2024-07-26 16:41:36.409886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.713 [2024-07-26 16:41:36.409925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.713 qpair failed and we were unable to recover it. 00:36:16.713 [2024-07-26 16:41:36.410096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.713 [2024-07-26 16:41:36.410131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.713 qpair failed and we were unable to recover it. 00:36:16.713 [2024-07-26 16:41:36.410303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.713 [2024-07-26 16:41:36.410354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.713 qpair failed and we were unable to recover it. 00:36:16.713 [2024-07-26 16:41:36.410521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.713 [2024-07-26 16:41:36.410559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.713 qpair failed and we were unable to recover it. 00:36:16.713 [2024-07-26 16:41:36.410786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.713 [2024-07-26 16:41:36.410820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.713 qpair failed and we were unable to recover it. 00:36:16.713 [2024-07-26 16:41:36.411001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.713 [2024-07-26 16:41:36.411039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.713 qpair failed and we were unable to recover it. 00:36:16.714 [2024-07-26 16:41:36.411222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.714 [2024-07-26 16:41:36.411257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.714 qpair failed and we were unable to recover it. 00:36:16.714 [2024-07-26 16:41:36.411403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.714 [2024-07-26 16:41:36.411438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.714 qpair failed and we were unable to recover it. 00:36:16.714 [2024-07-26 16:41:36.411592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.714 [2024-07-26 16:41:36.411627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.714 qpair failed and we were unable to recover it. 
00:36:16.714 [2024-07-26 16:41:36.411806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.714 [2024-07-26 16:41:36.411840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.714 qpair failed and we were unable to recover it. 00:36:16.714 [2024-07-26 16:41:36.412022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.714 [2024-07-26 16:41:36.412056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.714 qpair failed and we were unable to recover it. 00:36:16.714 [2024-07-26 16:41:36.412248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.714 [2024-07-26 16:41:36.412283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.714 qpair failed and we were unable to recover it. 00:36:16.714 [2024-07-26 16:41:36.412513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.714 [2024-07-26 16:41:36.412570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.714 qpair failed and we were unable to recover it. 00:36:16.714 [2024-07-26 16:41:36.412774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.714 [2024-07-26 16:41:36.412808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.714 qpair failed and we were unable to recover it. 00:36:16.714 [2024-07-26 16:41:36.413006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.714 [2024-07-26 16:41:36.413044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.714 qpair failed and we were unable to recover it. 00:36:16.714 [2024-07-26 16:41:36.413271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.714 [2024-07-26 16:41:36.413306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.714 qpair failed and we were unable to recover it. 00:36:16.714 [2024-07-26 16:41:36.413490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.714 [2024-07-26 16:41:36.413524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.714 qpair failed and we were unable to recover it. 00:36:16.714 [2024-07-26 16:41:36.413730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.714 [2024-07-26 16:41:36.413782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.714 qpair failed and we were unable to recover it. 00:36:16.714 [2024-07-26 16:41:36.413958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.714 [2024-07-26 16:41:36.413996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.714 qpair failed and we were unable to recover it. 
00:36:16.714 [2024-07-26 16:41:36.414191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.714 [2024-07-26 16:41:36.414226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.714 qpair failed and we were unable to recover it. 00:36:16.714 [2024-07-26 16:41:36.414447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.714 [2024-07-26 16:41:36.414485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.714 qpair failed and we were unable to recover it. 00:36:16.714 [2024-07-26 16:41:36.414809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.714 [2024-07-26 16:41:36.414865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.714 qpair failed and we were unable to recover it. 00:36:16.714 [2024-07-26 16:41:36.415101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.714 [2024-07-26 16:41:36.415135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.714 qpair failed and we were unable to recover it. 00:36:16.714 [2024-07-26 16:41:36.415294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.714 [2024-07-26 16:41:36.415329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.714 qpair failed and we were unable to recover it. 00:36:16.714 [2024-07-26 16:41:36.415564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.714 [2024-07-26 16:41:36.415601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.714 qpair failed and we were unable to recover it. 00:36:16.714 [2024-07-26 16:41:36.415779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.714 [2024-07-26 16:41:36.415823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.714 qpair failed and we were unable to recover it. 00:36:16.714 [2024-07-26 16:41:36.416025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.714 [2024-07-26 16:41:36.416070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.714 qpair failed and we were unable to recover it. 00:36:16.714 [2024-07-26 16:41:36.416256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.714 [2024-07-26 16:41:36.416291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.714 qpair failed and we were unable to recover it. 00:36:16.714 [2024-07-26 16:41:36.416480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.714 [2024-07-26 16:41:36.416514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.714 qpair failed and we were unable to recover it. 
00:36:16.714 [2024-07-26 16:41:36.416680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.714 [2024-07-26 16:41:36.416717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.714 qpair failed and we were unable to recover it. 00:36:16.714 [2024-07-26 16:41:36.416880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.714 [2024-07-26 16:41:36.416918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.714 qpair failed and we were unable to recover it. 00:36:16.714 [2024-07-26 16:41:36.417103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.714 [2024-07-26 16:41:36.417142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.714 qpair failed and we were unable to recover it. 00:36:16.714 [2024-07-26 16:41:36.417319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.714 [2024-07-26 16:41:36.417353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.714 qpair failed and we were unable to recover it. 00:36:16.714 [2024-07-26 16:41:36.417502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.714 [2024-07-26 16:41:36.417536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.714 qpair failed and we were unable to recover it. 00:36:16.714 [2024-07-26 16:41:36.417706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.714 [2024-07-26 16:41:36.417740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.714 qpair failed and we were unable to recover it. 00:36:16.714 [2024-07-26 16:41:36.417916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.714 [2024-07-26 16:41:36.417954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.714 qpair failed and we were unable to recover it. 00:36:16.714 [2024-07-26 16:41:36.418158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.714 [2024-07-26 16:41:36.418193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.714 qpair failed and we were unable to recover it. 00:36:16.714 [2024-07-26 16:41:36.418382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.714 [2024-07-26 16:41:36.418416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.714 qpair failed and we were unable to recover it. 00:36:16.714 [2024-07-26 16:41:36.418620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.715 [2024-07-26 16:41:36.418655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.715 qpair failed and we were unable to recover it. 
00:36:16.715 [2024-07-26 16:41:36.418838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.715 [2024-07-26 16:41:36.418871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.715 qpair failed and we were unable to recover it. 00:36:16.715 [2024-07-26 16:41:36.419097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.715 [2024-07-26 16:41:36.419132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.715 qpair failed and we were unable to recover it. 00:36:16.715 [2024-07-26 16:41:36.419316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.715 [2024-07-26 16:41:36.419350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.715 qpair failed and we were unable to recover it. 00:36:16.715 [2024-07-26 16:41:36.419497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.715 [2024-07-26 16:41:36.419531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.715 qpair failed and we were unable to recover it. 00:36:16.715 [2024-07-26 16:41:36.419696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.715 [2024-07-26 16:41:36.419731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.715 qpair failed and we were unable to recover it. 00:36:16.715 [2024-07-26 16:41:36.419903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.715 [2024-07-26 16:41:36.419937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.715 qpair failed and we were unable to recover it. 00:36:16.715 [2024-07-26 16:41:36.420149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.715 [2024-07-26 16:41:36.420188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.715 qpair failed and we were unable to recover it. 00:36:16.715 [2024-07-26 16:41:36.420382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.715 [2024-07-26 16:41:36.420417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.715 qpair failed and we were unable to recover it. 00:36:16.715 [2024-07-26 16:41:36.420596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.715 [2024-07-26 16:41:36.420634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.715 qpair failed and we were unable to recover it. 00:36:16.715 [2024-07-26 16:41:36.420800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.715 [2024-07-26 16:41:36.420838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.715 qpair failed and we were unable to recover it. 
00:36:16.715 [2024-07-26 16:41:36.421022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.715 [2024-07-26 16:41:36.421057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.715 qpair failed and we were unable to recover it. 00:36:16.715 [2024-07-26 16:41:36.421267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.715 [2024-07-26 16:41:36.421305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.715 qpair failed and we were unable to recover it. 00:36:16.715 [2024-07-26 16:41:36.421490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.715 [2024-07-26 16:41:36.421524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.715 qpair failed and we were unable to recover it. 00:36:16.715 [2024-07-26 16:41:36.421682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.715 [2024-07-26 16:41:36.421716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.715 qpair failed and we were unable to recover it. 00:36:16.715 [2024-07-26 16:41:36.421882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.715 [2024-07-26 16:41:36.421919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.715 qpair failed and we were unable to recover it. 00:36:16.715 [2024-07-26 16:41:36.422117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.715 [2024-07-26 16:41:36.422155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.715 qpair failed and we were unable to recover it. 00:36:16.715 [2024-07-26 16:41:36.422335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.715 [2024-07-26 16:41:36.422370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.715 qpair failed and we were unable to recover it. 00:36:16.715 [2024-07-26 16:41:36.422578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.715 [2024-07-26 16:41:36.422613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.715 qpair failed and we were unable to recover it. 00:36:16.715 [2024-07-26 16:41:36.422799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.715 [2024-07-26 16:41:36.422837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.715 qpair failed and we were unable to recover it. 00:36:16.715 [2024-07-26 16:41:36.423051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.715 [2024-07-26 16:41:36.423096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.715 qpair failed and we were unable to recover it. 
00:36:16.715 [2024-07-26 16:41:36.423303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.715 [2024-07-26 16:41:36.423341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.715 qpair failed and we were unable to recover it. 00:36:16.715 [2024-07-26 16:41:36.423515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.715 [2024-07-26 16:41:36.423552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.715 qpair failed and we were unable to recover it. 00:36:16.715 [2024-07-26 16:41:36.423732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.715 [2024-07-26 16:41:36.423767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.715 qpair failed and we were unable to recover it. 00:36:16.715 [2024-07-26 16:41:36.423945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.715 [2024-07-26 16:41:36.423979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.715 qpair failed and we were unable to recover it. 00:36:16.715 [2024-07-26 16:41:36.424228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.715 [2024-07-26 16:41:36.424266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.715 qpair failed and we were unable to recover it. 00:36:16.715 [2024-07-26 16:41:36.424474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.715 [2024-07-26 16:41:36.424508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.715 qpair failed and we were unable to recover it. 00:36:16.715 [2024-07-26 16:41:36.424711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.715 [2024-07-26 16:41:36.424748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.715 qpair failed and we were unable to recover it. 00:36:16.715 [2024-07-26 16:41:36.424914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.715 [2024-07-26 16:41:36.424952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.715 qpair failed and we were unable to recover it. 00:36:16.715 [2024-07-26 16:41:36.425139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.715 [2024-07-26 16:41:36.425174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.715 qpair failed and we were unable to recover it. 00:36:16.715 [2024-07-26 16:41:36.425327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.715 [2024-07-26 16:41:36.425361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.715 qpair failed and we were unable to recover it. 
00:36:16.715 [2024-07-26 16:41:36.425515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.715 [2024-07-26 16:41:36.425549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.715 qpair failed and we were unable to recover it. 00:36:16.715 [2024-07-26 16:41:36.425710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.715 [2024-07-26 16:41:36.425746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.715 qpair failed and we were unable to recover it. 00:36:16.715 [2024-07-26 16:41:36.425944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.715 [2024-07-26 16:41:36.425987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.715 qpair failed and we were unable to recover it. 00:36:16.715 [2024-07-26 16:41:36.426177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.715 [2024-07-26 16:41:36.426215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.715 qpair failed and we were unable to recover it. 00:36:16.715 [2024-07-26 16:41:36.426414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.715 [2024-07-26 16:41:36.426448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.715 qpair failed and we were unable to recover it. 00:36:16.992 [2024-07-26 16:41:36.426607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.992 [2024-07-26 16:41:36.426641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.992 qpair failed and we were unable to recover it. 00:36:16.992 [2024-07-26 16:41:36.426808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.992 [2024-07-26 16:41:36.426842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.992 qpair failed and we were unable to recover it. 00:36:16.992 [2024-07-26 16:41:36.426998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.992 [2024-07-26 16:41:36.427032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.992 qpair failed and we were unable to recover it. 00:36:16.992 [2024-07-26 16:41:36.427252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.992 [2024-07-26 16:41:36.427309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.992 qpair failed and we were unable to recover it. 00:36:16.992 [2024-07-26 16:41:36.427520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.992 [2024-07-26 16:41:36.427562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.992 qpair failed and we were unable to recover it. 
00:36:16.992 [2024-07-26 16:41:36.427756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.992 [2024-07-26 16:41:36.427793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.992 qpair failed and we were unable to recover it. 00:36:16.992 [2024-07-26 16:41:36.427966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.992 [2024-07-26 16:41:36.428010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.992 qpair failed and we were unable to recover it. 00:36:16.992 [2024-07-26 16:41:36.428227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.992 [2024-07-26 16:41:36.428264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.992 qpair failed and we were unable to recover it. 00:36:16.992 [2024-07-26 16:41:36.428440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.992 [2024-07-26 16:41:36.428476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.992 qpair failed and we were unable to recover it. 00:36:16.992 [2024-07-26 16:41:36.428651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.992 [2024-07-26 16:41:36.428689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.992 qpair failed and we were unable to recover it. 00:36:16.992 [2024-07-26 16:41:36.428879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.992 [2024-07-26 16:41:36.428914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.992 qpair failed and we were unable to recover it. 00:36:16.992 [2024-07-26 16:41:36.429075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.992 [2024-07-26 16:41:36.429111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.992 qpair failed and we were unable to recover it. 00:36:16.992 [2024-07-26 16:41:36.429307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.992 [2024-07-26 16:41:36.429362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.992 qpair failed and we were unable to recover it. 00:36:16.992 [2024-07-26 16:41:36.429550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.992 [2024-07-26 16:41:36.429586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.992 qpair failed and we were unable to recover it. 00:36:16.992 [2024-07-26 16:41:36.429766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.992 [2024-07-26 16:41:36.429801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.992 qpair failed and we were unable to recover it. 
00:36:16.992 [2024-07-26 16:41:36.429964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.992 [2024-07-26 16:41:36.430003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.992 qpair failed and we were unable to recover it. 00:36:16.992 [2024-07-26 16:41:36.430222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.992 [2024-07-26 16:41:36.430258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.992 qpair failed and we were unable to recover it. 00:36:16.992 [2024-07-26 16:41:36.430441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.992 [2024-07-26 16:41:36.430476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.993 qpair failed and we were unable to recover it. 00:36:16.993 [2024-07-26 16:41:36.430707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.993 [2024-07-26 16:41:36.430745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.993 qpair failed and we were unable to recover it. 00:36:16.993 [2024-07-26 16:41:36.430943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.993 [2024-07-26 16:41:36.430980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.993 qpair failed and we were unable to recover it. 00:36:16.993 [2024-07-26 16:41:36.431171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.993 [2024-07-26 16:41:36.431205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.993 qpair failed and we were unable to recover it. 00:36:16.993 [2024-07-26 16:41:36.431372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.993 [2024-07-26 16:41:36.431407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.993 qpair failed and we were unable to recover it. 00:36:16.993 [2024-07-26 16:41:36.431583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.993 [2024-07-26 16:41:36.431621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.993 qpair failed and we were unable to recover it. 00:36:16.993 [2024-07-26 16:41:36.431799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.993 [2024-07-26 16:41:36.431833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.993 qpair failed and we were unable to recover it. 00:36:16.993 [2024-07-26 16:41:36.432044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.993 [2024-07-26 16:41:36.432122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.993 qpair failed and we were unable to recover it. 
00:36:16.993 [2024-07-26 16:41:36.432340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.993 [2024-07-26 16:41:36.432382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.993 qpair failed and we were unable to recover it. 00:36:16.993 [2024-07-26 16:41:36.432575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.993 [2024-07-26 16:41:36.432611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.993 qpair failed and we were unable to recover it. 00:36:16.993 [2024-07-26 16:41:36.432787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.993 [2024-07-26 16:41:36.432829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.993 qpair failed and we were unable to recover it. 00:36:16.993 [2024-07-26 16:41:36.433050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.993 [2024-07-26 16:41:36.433096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.993 qpair failed and we were unable to recover it. 00:36:16.993 [2024-07-26 16:41:36.433275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.993 [2024-07-26 16:41:36.433311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.993 qpair failed and we were unable to recover it. 00:36:16.993 [2024-07-26 16:41:36.433512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.993 [2024-07-26 16:41:36.433551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.993 qpair failed and we were unable to recover it. 00:36:16.993 [2024-07-26 16:41:36.433714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.993 [2024-07-26 16:41:36.433753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.993 qpair failed and we were unable to recover it. 00:36:16.993 [2024-07-26 16:41:36.433952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.993 [2024-07-26 16:41:36.433987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:16.993 qpair failed and we were unable to recover it. 00:36:16.993 [2024-07-26 16:41:36.434215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.993 [2024-07-26 16:41:36.434283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.993 qpair failed and we were unable to recover it. 00:36:16.993 [2024-07-26 16:41:36.434497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.993 [2024-07-26 16:41:36.434533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.993 qpair failed and we were unable to recover it. 
00:36:16.993 [2024-07-26 16:41:36.434711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.993 [2024-07-26 16:41:36.434745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.993 qpair failed and we were unable to recover it. 00:36:16.993 [2024-07-26 16:41:36.434949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.993 [2024-07-26 16:41:36.435009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.993 qpair failed and we were unable to recover it. 00:36:16.993 [2024-07-26 16:41:36.435191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.993 [2024-07-26 16:41:36.435230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.993 qpair failed and we were unable to recover it. 00:36:16.993 [2024-07-26 16:41:36.435391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.993 [2024-07-26 16:41:36.435424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.993 qpair failed and we were unable to recover it. 00:36:16.993 [2024-07-26 16:41:36.435653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.993 [2024-07-26 16:41:36.435711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.993 qpair failed and we were unable to recover it. 00:36:16.993 [2024-07-26 16:41:36.435884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.993 [2024-07-26 16:41:36.435921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.993 qpair failed and we were unable to recover it. 00:36:16.993 [2024-07-26 16:41:36.436113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.993 [2024-07-26 16:41:36.436147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.993 qpair failed and we were unable to recover it. 00:36:16.993 [2024-07-26 16:41:36.436327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.993 [2024-07-26 16:41:36.436361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.993 qpair failed and we were unable to recover it. 00:36:16.993 [2024-07-26 16:41:36.436536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.993 [2024-07-26 16:41:36.436573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.993 qpair failed and we were unable to recover it. 00:36:16.993 [2024-07-26 16:41:36.436768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.993 [2024-07-26 16:41:36.436801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.993 qpair failed and we were unable to recover it. 
00:36:16.993 [2024-07-26 16:41:36.436944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.993 [2024-07-26 16:41:36.436977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.993 qpair failed and we were unable to recover it. 00:36:16.993 [2024-07-26 16:41:36.437186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.993 [2024-07-26 16:41:36.437223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.993 qpair failed and we were unable to recover it. 00:36:16.993 [2024-07-26 16:41:36.437454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.993 [2024-07-26 16:41:36.437487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.993 qpair failed and we were unable to recover it. 00:36:16.993 [2024-07-26 16:41:36.437737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.993 [2024-07-26 16:41:36.437796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.993 qpair failed and we were unable to recover it. 00:36:16.993 [2024-07-26 16:41:36.437996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.993 [2024-07-26 16:41:36.438029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.993 qpair failed and we were unable to recover it. 00:36:16.993 [2024-07-26 16:41:36.438198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.993 [2024-07-26 16:41:36.438232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.993 qpair failed and we were unable to recover it. 00:36:16.993 [2024-07-26 16:41:36.438401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.993 [2024-07-26 16:41:36.438455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.993 qpair failed and we were unable to recover it. 00:36:16.993 [2024-07-26 16:41:36.438652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.993 [2024-07-26 16:41:36.438689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.993 qpair failed and we were unable to recover it. 00:36:16.993 [2024-07-26 16:41:36.438852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.993 [2024-07-26 16:41:36.438885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.993 qpair failed and we were unable to recover it. 00:36:16.993 [2024-07-26 16:41:36.439070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.993 [2024-07-26 16:41:36.439103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.993 qpair failed and we were unable to recover it. 
00:36:16.994 [2024-07-26 16:41:36.439339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.994 [2024-07-26 16:41:36.439376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.994 qpair failed and we were unable to recover it. 00:36:16.994 [2024-07-26 16:41:36.439577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.994 [2024-07-26 16:41:36.439610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.994 qpair failed and we were unable to recover it. 00:36:16.994 [2024-07-26 16:41:36.439765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.994 [2024-07-26 16:41:36.439798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.994 qpair failed and we were unable to recover it. 00:36:16.994 [2024-07-26 16:41:36.439995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.994 [2024-07-26 16:41:36.440044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.994 qpair failed and we were unable to recover it. 00:36:16.994 [2024-07-26 16:41:36.440270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.994 [2024-07-26 16:41:36.440304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.994 qpair failed and we were unable to recover it. 00:36:16.994 [2024-07-26 16:41:36.440477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.994 [2024-07-26 16:41:36.440513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.994 qpair failed and we were unable to recover it. 00:36:16.994 [2024-07-26 16:41:36.440704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.994 [2024-07-26 16:41:36.440741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.994 qpair failed and we were unable to recover it. 00:36:16.994 [2024-07-26 16:41:36.440908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.994 [2024-07-26 16:41:36.440942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.994 qpair failed and we were unable to recover it. 00:36:16.994 [2024-07-26 16:41:36.441165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.994 [2024-07-26 16:41:36.441202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.994 qpair failed and we were unable to recover it. 00:36:16.994 [2024-07-26 16:41:36.441375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.994 [2024-07-26 16:41:36.441412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.994 qpair failed and we were unable to recover it. 
00:36:16.994 [2024-07-26 16:41:36.441616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.994 [2024-07-26 16:41:36.441649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.994 qpair failed and we were unable to recover it. 00:36:16.994 [2024-07-26 16:41:36.441847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.994 [2024-07-26 16:41:36.441884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.994 qpair failed and we were unable to recover it. 00:36:16.994 [2024-07-26 16:41:36.442109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.994 [2024-07-26 16:41:36.442147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.994 qpair failed and we were unable to recover it. 00:36:16.994 [2024-07-26 16:41:36.442320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.994 [2024-07-26 16:41:36.442353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.994 qpair failed and we were unable to recover it. 00:36:16.994 [2024-07-26 16:41:36.442574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.994 [2024-07-26 16:41:36.442611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.994 qpair failed and we were unable to recover it. 00:36:16.994 [2024-07-26 16:41:36.442826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.994 [2024-07-26 16:41:36.442862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.994 qpair failed and we were unable to recover it. 00:36:16.994 [2024-07-26 16:41:36.443074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.994 [2024-07-26 16:41:36.443107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.994 qpair failed and we were unable to recover it. 00:36:16.994 [2024-07-26 16:41:36.443267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.994 [2024-07-26 16:41:36.443300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.994 qpair failed and we were unable to recover it. 00:36:16.994 [2024-07-26 16:41:36.443515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.994 [2024-07-26 16:41:36.443552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.994 qpair failed and we were unable to recover it. 00:36:16.994 [2024-07-26 16:41:36.443737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.994 [2024-07-26 16:41:36.443770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.994 qpair failed and we were unable to recover it. 
00:36:16.994 [2024-07-26 16:41:36.443983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.994 [2024-07-26 16:41:36.444019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.994 qpair failed and we were unable to recover it. 00:36:16.994 [2024-07-26 16:41:36.444205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.994 [2024-07-26 16:41:36.444239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.994 qpair failed and we were unable to recover it. 00:36:16.994 [2024-07-26 16:41:36.444418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.994 [2024-07-26 16:41:36.444455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.994 qpair failed and we were unable to recover it. 00:36:16.994 [2024-07-26 16:41:36.444632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.994 [2024-07-26 16:41:36.444664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.994 qpair failed and we were unable to recover it. 00:36:16.994 [2024-07-26 16:41:36.444870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.994 [2024-07-26 16:41:36.444904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.994 qpair failed and we were unable to recover it. 00:36:16.994 [2024-07-26 16:41:36.445079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.994 [2024-07-26 16:41:36.445113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.994 qpair failed and we were unable to recover it. 00:36:16.994 [2024-07-26 16:41:36.445290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.994 [2024-07-26 16:41:36.445332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.994 qpair failed and we were unable to recover it. 00:36:16.994 [2024-07-26 16:41:36.445526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.994 [2024-07-26 16:41:36.445563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.994 qpair failed and we were unable to recover it. 00:36:16.994 [2024-07-26 16:41:36.445737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.994 [2024-07-26 16:41:36.445771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.994 qpair failed and we were unable to recover it. 00:36:16.994 [2024-07-26 16:41:36.445958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.994 [2024-07-26 16:41:36.445994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.994 qpair failed and we were unable to recover it. 
00:36:16.994 [2024-07-26 16:41:36.446198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.994 [2024-07-26 16:41:36.446235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.994 qpair failed and we were unable to recover it. 00:36:16.994 [2024-07-26 16:41:36.446398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.994 [2024-07-26 16:41:36.446430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.994 qpair failed and we were unable to recover it. 00:36:16.994 [2024-07-26 16:41:36.446606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.994 [2024-07-26 16:41:36.446643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.994 qpair failed and we were unable to recover it. 00:36:16.994 [2024-07-26 16:41:36.446846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.994 [2024-07-26 16:41:36.446882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.994 qpair failed and we were unable to recover it. 00:36:16.994 [2024-07-26 16:41:36.447055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.994 [2024-07-26 16:41:36.447096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.994 qpair failed and we were unable to recover it. 00:36:16.994 [2024-07-26 16:41:36.447264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.994 [2024-07-26 16:41:36.447300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.994 qpair failed and we were unable to recover it. 00:36:16.994 [2024-07-26 16:41:36.447535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.994 [2024-07-26 16:41:36.447568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.994 qpair failed and we were unable to recover it. 00:36:16.995 [2024-07-26 16:41:36.447719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.995 [2024-07-26 16:41:36.447752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.995 qpair failed and we were unable to recover it. 00:36:16.995 [2024-07-26 16:41:36.447975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.995 [2024-07-26 16:41:36.448011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.995 qpair failed and we were unable to recover it. 00:36:16.995 [2024-07-26 16:41:36.448231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.995 [2024-07-26 16:41:36.448268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.995 qpair failed and we were unable to recover it. 
00:36:16.995 [2024-07-26 16:41:36.448475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.995 [2024-07-26 16:41:36.448509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.995 qpair failed and we were unable to recover it. 00:36:16.995 [2024-07-26 16:41:36.448683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.995 [2024-07-26 16:41:36.448715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.995 qpair failed and we were unable to recover it. 00:36:16.995 [2024-07-26 16:41:36.448890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.995 [2024-07-26 16:41:36.448934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.995 qpair failed and we were unable to recover it. 00:36:16.995 [2024-07-26 16:41:36.449083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.995 [2024-07-26 16:41:36.449117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.995 qpair failed and we were unable to recover it. 00:36:16.995 [2024-07-26 16:41:36.449272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.995 [2024-07-26 16:41:36.449306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.995 qpair failed and we were unable to recover it. 00:36:16.995 [2024-07-26 16:41:36.449486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.995 [2024-07-26 16:41:36.449519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.995 qpair failed and we were unable to recover it. 00:36:16.995 [2024-07-26 16:41:36.449765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.995 [2024-07-26 16:41:36.449798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.995 qpair failed and we were unable to recover it. 00:36:16.995 [2024-07-26 16:41:36.449988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.995 [2024-07-26 16:41:36.450024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.995 qpair failed and we were unable to recover it. 00:36:16.995 [2024-07-26 16:41:36.450216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.995 [2024-07-26 16:41:36.450250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.995 qpair failed and we were unable to recover it. 00:36:16.995 [2024-07-26 16:41:36.450436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.995 [2024-07-26 16:41:36.450468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.995 qpair failed and we were unable to recover it. 
00:36:16.995 [2024-07-26 16:41:36.450670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.995 [2024-07-26 16:41:36.450706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.995 qpair failed and we were unable to recover it. 00:36:16.995 [2024-07-26 16:41:36.450902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.995 [2024-07-26 16:41:36.450938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.995 qpair failed and we were unable to recover it. 00:36:16.995 [2024-07-26 16:41:36.451110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.995 [2024-07-26 16:41:36.451144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.995 qpair failed and we were unable to recover it. 00:36:16.995 [2024-07-26 16:41:36.451313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.995 [2024-07-26 16:41:36.451350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.995 qpair failed and we were unable to recover it. 00:36:16.995 [2024-07-26 16:41:36.451584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.995 [2024-07-26 16:41:36.451618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.995 qpair failed and we were unable to recover it. 00:36:16.995 [2024-07-26 16:41:36.451794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.995 [2024-07-26 16:41:36.451826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.995 qpair failed and we were unable to recover it. 00:36:16.995 [2024-07-26 16:41:36.452022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.995 [2024-07-26 16:41:36.452056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.995 qpair failed and we were unable to recover it. 00:36:16.995 [2024-07-26 16:41:36.452265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.995 [2024-07-26 16:41:36.452303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.995 qpair failed and we were unable to recover it. 00:36:16.995 [2024-07-26 16:41:36.452490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.995 [2024-07-26 16:41:36.452526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.995 qpair failed and we were unable to recover it. 00:36:16.995 [2024-07-26 16:41:36.452726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.995 [2024-07-26 16:41:36.452760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.995 qpair failed and we were unable to recover it. 
00:36:16.995 [2024-07-26 16:41:36.452948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.995 [2024-07-26 16:41:36.452981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.995 qpair failed and we were unable to recover it. 00:36:16.995 [2024-07-26 16:41:36.453202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.995 [2024-07-26 16:41:36.453239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.995 qpair failed and we were unable to recover it. 00:36:16.995 [2024-07-26 16:41:36.453409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.995 [2024-07-26 16:41:36.453447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.995 qpair failed and we were unable to recover it. 00:36:16.995 [2024-07-26 16:41:36.453627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.995 [2024-07-26 16:41:36.453664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.995 qpair failed and we were unable to recover it. 00:36:16.995 [2024-07-26 16:41:36.453838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.995 [2024-07-26 16:41:36.453875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.995 qpair failed and we were unable to recover it. 00:36:16.995 [2024-07-26 16:41:36.454047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.995 [2024-07-26 16:41:36.454090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.995 qpair failed and we were unable to recover it. 00:36:16.995 [2024-07-26 16:41:36.454259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.995 [2024-07-26 16:41:36.454297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.995 qpair failed and we were unable to recover it. 00:36:16.995 [2024-07-26 16:41:36.454495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.995 [2024-07-26 16:41:36.454532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.995 qpair failed and we were unable to recover it. 00:36:16.995 [2024-07-26 16:41:36.454736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.995 [2024-07-26 16:41:36.454770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.995 qpair failed and we were unable to recover it. 00:36:16.995 [2024-07-26 16:41:36.454934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.995 [2024-07-26 16:41:36.454970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.995 qpair failed and we were unable to recover it. 
00:36:16.995 [2024-07-26 16:41:36.455142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.995 [2024-07-26 16:41:36.455179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.995 qpair failed and we were unable to recover it. 00:36:16.995 [2024-07-26 16:41:36.455353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.995 [2024-07-26 16:41:36.455387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.995 qpair failed and we were unable to recover it. 00:36:16.995 [2024-07-26 16:41:36.455585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.995 [2024-07-26 16:41:36.455622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.995 qpair failed and we were unable to recover it. 00:36:16.995 [2024-07-26 16:41:36.455818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.995 [2024-07-26 16:41:36.455854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.995 qpair failed and we were unable to recover it. 00:36:16.995 [2024-07-26 16:41:36.456022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.996 [2024-07-26 16:41:36.456055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.996 qpair failed and we were unable to recover it. 00:36:16.996 [2024-07-26 16:41:36.456382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.996 [2024-07-26 16:41:36.456415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.996 qpair failed and we were unable to recover it. 00:36:16.996 [2024-07-26 16:41:36.456597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.996 [2024-07-26 16:41:36.456634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.996 qpair failed and we were unable to recover it. 00:36:16.996 [2024-07-26 16:41:36.456833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.996 [2024-07-26 16:41:36.456867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.996 qpair failed and we were unable to recover it. 00:36:16.996 [2024-07-26 16:41:36.457045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.996 [2024-07-26 16:41:36.457096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.996 qpair failed and we were unable to recover it. 00:36:16.996 [2024-07-26 16:41:36.457274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.996 [2024-07-26 16:41:36.457307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.996 qpair failed and we were unable to recover it. 
00:36:16.996 [2024-07-26 16:41:36.457491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.996 [2024-07-26 16:41:36.457525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.996 qpair failed and we were unable to recover it. 00:36:16.996 [2024-07-26 16:41:36.457726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.996 [2024-07-26 16:41:36.457764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.996 qpair failed and we were unable to recover it. 00:36:16.996 [2024-07-26 16:41:36.457961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.996 [2024-07-26 16:41:36.457998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.996 qpair failed and we were unable to recover it. 00:36:16.996 [2024-07-26 16:41:36.458198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.996 [2024-07-26 16:41:36.458232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.996 qpair failed and we were unable to recover it. 00:36:16.996 [2024-07-26 16:41:36.458435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.996 [2024-07-26 16:41:36.458474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.996 qpair failed and we were unable to recover it. 00:36:16.996 [2024-07-26 16:41:36.458643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.996 [2024-07-26 16:41:36.458680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.996 qpair failed and we were unable to recover it. 00:36:16.996 [2024-07-26 16:41:36.458885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.996 [2024-07-26 16:41:36.458919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.996 qpair failed and we were unable to recover it. 00:36:16.996 [2024-07-26 16:41:36.459116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.996 [2024-07-26 16:41:36.459153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.996 qpair failed and we were unable to recover it. 00:36:16.996 [2024-07-26 16:41:36.459350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.996 [2024-07-26 16:41:36.459387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.996 qpair failed and we were unable to recover it. 00:36:16.996 [2024-07-26 16:41:36.459592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.996 [2024-07-26 16:41:36.459625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.996 qpair failed and we were unable to recover it. 
00:36:16.996 [2024-07-26 16:41:36.459795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.996 [2024-07-26 16:41:36.459831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.996 qpair failed and we were unable to recover it. 00:36:16.996 [2024-07-26 16:41:36.460007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.996 [2024-07-26 16:41:36.460041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.996 qpair failed and we were unable to recover it. 00:36:16.996 [2024-07-26 16:41:36.460204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.996 [2024-07-26 16:41:36.460238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.996 qpair failed and we were unable to recover it. 00:36:16.996 [2024-07-26 16:41:36.460411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.996 [2024-07-26 16:41:36.460448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.996 qpair failed and we were unable to recover it. 00:36:16.996 [2024-07-26 16:41:36.460641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.996 [2024-07-26 16:41:36.460677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.996 qpair failed and we were unable to recover it. 00:36:16.996 [2024-07-26 16:41:36.460876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.996 [2024-07-26 16:41:36.460909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.996 qpair failed and we were unable to recover it. 00:36:16.996 [2024-07-26 16:41:36.461112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.996 [2024-07-26 16:41:36.461151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.996 qpair failed and we were unable to recover it. 00:36:16.996 [2024-07-26 16:41:36.461316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.996 [2024-07-26 16:41:36.461352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.996 qpair failed and we were unable to recover it. 00:36:16.996 [2024-07-26 16:41:36.461524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.996 [2024-07-26 16:41:36.461557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.996 qpair failed and we were unable to recover it. 00:36:16.996 [2024-07-26 16:41:36.461710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.996 [2024-07-26 16:41:36.461744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.996 qpair failed and we were unable to recover it. 
00:36:16.996 [2024-07-26 16:41:36.461925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.996 [2024-07-26 16:41:36.461958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.996 qpair failed and we were unable to recover it. 00:36:16.996 [2024-07-26 16:41:36.462137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.996 [2024-07-26 16:41:36.462171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.996 qpair failed and we were unable to recover it. 00:36:16.996 [2024-07-26 16:41:36.462366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.996 [2024-07-26 16:41:36.462408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.996 qpair failed and we were unable to recover it. 00:36:16.996 [2024-07-26 16:41:36.462616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.996 [2024-07-26 16:41:36.462653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.996 qpair failed and we were unable to recover it. 00:36:16.996 [2024-07-26 16:41:36.462857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.996 [2024-07-26 16:41:36.462890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.996 qpair failed and we were unable to recover it. 00:36:16.996 [2024-07-26 16:41:36.463072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.996 [2024-07-26 16:41:36.463110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.996 qpair failed and we were unable to recover it. 00:36:16.996 [2024-07-26 16:41:36.463285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.996 [2024-07-26 16:41:36.463333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.996 qpair failed and we were unable to recover it. 00:36:16.996 [2024-07-26 16:41:36.463536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.996 [2024-07-26 16:41:36.463569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.996 qpair failed and we were unable to recover it. 00:36:16.996 [2024-07-26 16:41:36.463738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.996 [2024-07-26 16:41:36.463775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.996 qpair failed and we were unable to recover it. 00:36:16.996 [2024-07-26 16:41:36.463990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.996 [2024-07-26 16:41:36.464023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.996 qpair failed and we were unable to recover it. 
00:36:16.996 [2024-07-26 16:41:36.464206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.996 [2024-07-26 16:41:36.464240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.996 qpair failed and we were unable to recover it. 00:36:16.996 [2024-07-26 16:41:36.464390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.996 [2024-07-26 16:41:36.464423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.996 qpair failed and we were unable to recover it. 00:36:16.997 [2024-07-26 16:41:36.464645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.997 [2024-07-26 16:41:36.464682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.997 qpair failed and we were unable to recover it. 00:36:16.997 [2024-07-26 16:41:36.464866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.997 [2024-07-26 16:41:36.464899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.997 qpair failed and we were unable to recover it. 00:36:16.997 [2024-07-26 16:41:36.465088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.997 [2024-07-26 16:41:36.465148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.997 qpair failed and we were unable to recover it. 00:36:16.997 [2024-07-26 16:41:36.465348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.997 [2024-07-26 16:41:36.465385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.997 qpair failed and we were unable to recover it. 00:36:16.997 [2024-07-26 16:41:36.465570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.997 [2024-07-26 16:41:36.465603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.997 qpair failed and we were unable to recover it. 00:36:16.997 [2024-07-26 16:41:36.465755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.997 [2024-07-26 16:41:36.465788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.997 qpair failed and we were unable to recover it. 00:36:16.997 [2024-07-26 16:41:36.465990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.997 [2024-07-26 16:41:36.466026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.997 qpair failed and we were unable to recover it. 00:36:16.997 [2024-07-26 16:41:36.466257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.997 [2024-07-26 16:41:36.466291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.997 qpair failed and we were unable to recover it. 
00:36:16.997 [2024-07-26 16:41:36.466495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.997 [2024-07-26 16:41:36.466532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.997 qpair failed and we were unable to recover it. 00:36:16.997 [2024-07-26 16:41:36.466693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.997 [2024-07-26 16:41:36.466730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.997 qpair failed and we were unable to recover it. 00:36:16.997 [2024-07-26 16:41:36.466903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.997 [2024-07-26 16:41:36.466936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.997 qpair failed and we were unable to recover it. 00:36:16.997 [2024-07-26 16:41:36.467108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.997 [2024-07-26 16:41:36.467145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.997 qpair failed and we were unable to recover it. 00:36:16.997 [2024-07-26 16:41:36.467343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.997 [2024-07-26 16:41:36.467380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.997 qpair failed and we were unable to recover it. 00:36:16.997 [2024-07-26 16:41:36.467590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.997 [2024-07-26 16:41:36.467623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.997 qpair failed and we were unable to recover it. 00:36:16.997 [2024-07-26 16:41:36.467795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.997 [2024-07-26 16:41:36.467831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.997 qpair failed and we were unable to recover it. 00:36:16.997 [2024-07-26 16:41:36.468012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.997 [2024-07-26 16:41:36.468046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.997 qpair failed and we were unable to recover it. 00:36:16.997 [2024-07-26 16:41:36.468237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.997 [2024-07-26 16:41:36.468271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.997 qpair failed and we were unable to recover it. 00:36:16.997 [2024-07-26 16:41:36.468476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.997 [2024-07-26 16:41:36.468513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.997 qpair failed and we were unable to recover it. 
00:36:16.997 [2024-07-26 16:41:36.468716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.997 [2024-07-26 16:41:36.468753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.997 qpair failed and we were unable to recover it. 00:36:16.997 [2024-07-26 16:41:36.468959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.997 [2024-07-26 16:41:36.468992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.997 qpair failed and we were unable to recover it. 00:36:16.997 [2024-07-26 16:41:36.469198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.997 [2024-07-26 16:41:36.469237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.997 qpair failed and we were unable to recover it. 00:36:16.997 [2024-07-26 16:41:36.469443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.997 [2024-07-26 16:41:36.469480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.997 qpair failed and we were unable to recover it. 00:36:16.997 [2024-07-26 16:41:36.469677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.997 [2024-07-26 16:41:36.469710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.997 qpair failed and we were unable to recover it. 00:36:16.997 [2024-07-26 16:41:36.469923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.997 [2024-07-26 16:41:36.469960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.997 qpair failed and we were unable to recover it. 00:36:16.997 [2024-07-26 16:41:36.470151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.997 [2024-07-26 16:41:36.470189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.997 qpair failed and we were unable to recover it. 00:36:16.997 [2024-07-26 16:41:36.470371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.997 [2024-07-26 16:41:36.470404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.997 qpair failed and we were unable to recover it. 00:36:16.997 [2024-07-26 16:41:36.470613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.997 [2024-07-26 16:41:36.470649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.997 qpair failed and we were unable to recover it. 00:36:16.997 [2024-07-26 16:41:36.470865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.997 [2024-07-26 16:41:36.470902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.997 qpair failed and we were unable to recover it. 
00:36:16.997 [2024-07-26 16:41:36.471088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.997 [2024-07-26 16:41:36.471121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.997 qpair failed and we were unable to recover it. 00:36:16.997 [2024-07-26 16:41:36.471276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.997 [2024-07-26 16:41:36.471309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.997 qpair failed and we were unable to recover it. 00:36:16.997 [2024-07-26 16:41:36.471511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.997 [2024-07-26 16:41:36.471552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.998 qpair failed and we were unable to recover it. 00:36:16.998 [2024-07-26 16:41:36.471730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.998 [2024-07-26 16:41:36.471763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.998 qpair failed and we were unable to recover it. 00:36:16.998 [2024-07-26 16:41:36.471955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.998 [2024-07-26 16:41:36.471988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.998 qpair failed and we were unable to recover it. 00:36:16.998 [2024-07-26 16:41:36.472192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.998 [2024-07-26 16:41:36.472229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.998 qpair failed and we were unable to recover it. 00:36:16.998 [2024-07-26 16:41:36.472394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.998 [2024-07-26 16:41:36.472427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.998 qpair failed and we were unable to recover it. 00:36:16.998 [2024-07-26 16:41:36.472648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.998 [2024-07-26 16:41:36.472685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.998 qpair failed and we were unable to recover it. 00:36:16.998 [2024-07-26 16:41:36.472886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.998 [2024-07-26 16:41:36.472923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.998 qpair failed and we were unable to recover it. 00:36:16.998 [2024-07-26 16:41:36.473108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.998 [2024-07-26 16:41:36.473142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.998 qpair failed and we were unable to recover it. 
00:36:16.998 [2024-07-26 16:41:36.473361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.998 [2024-07-26 16:41:36.473397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.998 qpair failed and we were unable to recover it. 00:36:16.998 [2024-07-26 16:41:36.473627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.998 [2024-07-26 16:41:36.473664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.998 qpair failed and we were unable to recover it. 00:36:16.998 [2024-07-26 16:41:36.473844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.998 [2024-07-26 16:41:36.473878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.998 qpair failed and we were unable to recover it. 00:36:16.998 [2024-07-26 16:41:36.474099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.998 [2024-07-26 16:41:36.474137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.998 qpair failed and we were unable to recover it. 00:36:16.998 [2024-07-26 16:41:36.474357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.998 [2024-07-26 16:41:36.474400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.998 qpair failed and we were unable to recover it. 00:36:16.998 [2024-07-26 16:41:36.474621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.998 [2024-07-26 16:41:36.474654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.998 qpair failed and we were unable to recover it. 00:36:16.998 [2024-07-26 16:41:36.474850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.998 [2024-07-26 16:41:36.474886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.998 qpair failed and we were unable to recover it. 00:36:16.998 [2024-07-26 16:41:36.475078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.998 [2024-07-26 16:41:36.475115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.998 qpair failed and we were unable to recover it. 00:36:16.998 [2024-07-26 16:41:36.475291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.998 [2024-07-26 16:41:36.475325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.998 qpair failed and we were unable to recover it. 00:36:16.998 [2024-07-26 16:41:36.475546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.998 [2024-07-26 16:41:36.475583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.998 qpair failed and we were unable to recover it. 
00:36:16.998 [2024-07-26 16:41:36.475785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.998 [2024-07-26 16:41:36.475822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.998 qpair failed and we were unable to recover it. 00:36:16.998 [2024-07-26 16:41:36.476047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.998 [2024-07-26 16:41:36.476088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.998 qpair failed and we were unable to recover it. 00:36:16.998 [2024-07-26 16:41:36.476309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.998 [2024-07-26 16:41:36.476346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.998 qpair failed and we were unable to recover it. 00:36:16.998 [2024-07-26 16:41:36.476540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.998 [2024-07-26 16:41:36.476573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.998 qpair failed and we were unable to recover it. 00:36:16.998 [2024-07-26 16:41:36.476773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.998 [2024-07-26 16:41:36.476806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.998 qpair failed and we were unable to recover it. 00:36:16.998 [2024-07-26 16:41:36.476999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.998 [2024-07-26 16:41:36.477037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.998 qpair failed and we were unable to recover it. 00:36:16.998 [2024-07-26 16:41:36.477232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.998 [2024-07-26 16:41:36.477270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.998 qpair failed and we were unable to recover it. 00:36:16.998 [2024-07-26 16:41:36.477475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.998 [2024-07-26 16:41:36.477509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.998 qpair failed and we were unable to recover it. 00:36:16.998 [2024-07-26 16:41:36.477685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.998 [2024-07-26 16:41:36.477719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.998 qpair failed and we were unable to recover it. 00:36:16.998 [2024-07-26 16:41:36.477946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.998 [2024-07-26 16:41:36.477982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.998 qpair failed and we were unable to recover it. 
00:36:16.998 [2024-07-26 16:41:36.478189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.998 [2024-07-26 16:41:36.478253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.998 qpair failed and we were unable to recover it. 00:36:16.998 [2024-07-26 16:41:36.478427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.998 [2024-07-26 16:41:36.478465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.998 qpair failed and we were unable to recover it. 00:36:16.998 [2024-07-26 16:41:36.478688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.998 [2024-07-26 16:41:36.478724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.998 qpair failed and we were unable to recover it. 00:36:16.998 [2024-07-26 16:41:36.478945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.998 [2024-07-26 16:41:36.478978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.998 qpair failed and we were unable to recover it. 00:36:16.998 [2024-07-26 16:41:36.479207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.998 [2024-07-26 16:41:36.479244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.998 qpair failed and we were unable to recover it. 00:36:16.998 [2024-07-26 16:41:36.479451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.998 [2024-07-26 16:41:36.479485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.998 qpair failed and we were unable to recover it. 00:36:16.998 [2024-07-26 16:41:36.479664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.998 [2024-07-26 16:41:36.479697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.998 qpair failed and we were unable to recover it. 00:36:16.998 [2024-07-26 16:41:36.479919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.998 [2024-07-26 16:41:36.479956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.998 qpair failed and we were unable to recover it. 00:36:16.998 [2024-07-26 16:41:36.480153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.998 [2024-07-26 16:41:36.480190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.998 qpair failed and we were unable to recover it. 00:36:16.998 [2024-07-26 16:41:36.480372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.998 [2024-07-26 16:41:36.480405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.998 qpair failed and we were unable to recover it. 
00:36:16.999 [2024-07-26 16:41:36.480634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.999 [2024-07-26 16:41:36.480671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.999 qpair failed and we were unable to recover it. 00:36:16.999 [2024-07-26 16:41:36.480847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.999 [2024-07-26 16:41:36.480883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.999 qpair failed and we were unable to recover it. 00:36:16.999 [2024-07-26 16:41:36.481054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.999 [2024-07-26 16:41:36.481099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.999 qpair failed and we were unable to recover it. 00:36:16.999 [2024-07-26 16:41:36.481291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.999 [2024-07-26 16:41:36.481327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.999 qpair failed and we were unable to recover it. 00:36:16.999 [2024-07-26 16:41:36.481527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.999 [2024-07-26 16:41:36.481560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.999 qpair failed and we were unable to recover it. 00:36:16.999 [2024-07-26 16:41:36.481759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.999 [2024-07-26 16:41:36.481791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.999 qpair failed and we were unable to recover it. 00:36:16.999 [2024-07-26 16:41:36.482023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.999 [2024-07-26 16:41:36.482055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.999 qpair failed and we were unable to recover it. 00:36:16.999 [2024-07-26 16:41:36.482265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.999 [2024-07-26 16:41:36.482302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.999 qpair failed and we were unable to recover it. 00:36:16.999 [2024-07-26 16:41:36.482571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.999 [2024-07-26 16:41:36.482614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.999 qpair failed and we were unable to recover it. 00:36:16.999 [2024-07-26 16:41:36.482801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.999 [2024-07-26 16:41:36.482838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.999 qpair failed and we were unable to recover it. 
00:36:16.999 [2024-07-26 16:41:36.483046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.999 [2024-07-26 16:41:36.483101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.999 qpair failed and we were unable to recover it. 00:36:16.999 [2024-07-26 16:41:36.483298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.999 [2024-07-26 16:41:36.483332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.999 qpair failed and we were unable to recover it. 00:36:16.999 [2024-07-26 16:41:36.483535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.999 [2024-07-26 16:41:36.483572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.999 qpair failed and we were unable to recover it. 00:36:16.999 [2024-07-26 16:41:36.483769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.999 [2024-07-26 16:41:36.483805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.999 qpair failed and we were unable to recover it. 00:36:16.999 [2024-07-26 16:41:36.483986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.999 [2024-07-26 16:41:36.484020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.999 qpair failed and we were unable to recover it. 00:36:16.999 [2024-07-26 16:41:36.484227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.999 [2024-07-26 16:41:36.484264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.999 qpair failed and we were unable to recover it. 00:36:16.999 [2024-07-26 16:41:36.484471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.999 [2024-07-26 16:41:36.484508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.999 qpair failed and we were unable to recover it. 00:36:16.999 [2024-07-26 16:41:36.484731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.999 [2024-07-26 16:41:36.484765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.999 qpair failed and we were unable to recover it. 00:36:16.999 [2024-07-26 16:41:36.484920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.999 [2024-07-26 16:41:36.484953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.999 qpair failed and we were unable to recover it. 00:36:16.999 [2024-07-26 16:41:36.485133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.999 [2024-07-26 16:41:36.485168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.999 qpair failed and we were unable to recover it. 
00:36:16.999 [2024-07-26 16:41:36.485358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.999 [2024-07-26 16:41:36.485391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.999 qpair failed and we were unable to recover it. 00:36:16.999 [2024-07-26 16:41:36.485596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.999 [2024-07-26 16:41:36.485633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.999 qpair failed and we were unable to recover it. 00:36:16.999 [2024-07-26 16:41:36.485840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.999 [2024-07-26 16:41:36.485877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.999 qpair failed and we were unable to recover it. 00:36:16.999 [2024-07-26 16:41:36.486070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.999 [2024-07-26 16:41:36.486104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.999 qpair failed and we were unable to recover it. 00:36:16.999 [2024-07-26 16:41:36.486258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.999 [2024-07-26 16:41:36.486291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.999 qpair failed and we were unable to recover it. 00:36:16.999 [2024-07-26 16:41:36.486444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.999 [2024-07-26 16:41:36.486478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.999 qpair failed and we were unable to recover it. 00:36:16.999 [2024-07-26 16:41:36.486654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.999 [2024-07-26 16:41:36.486687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.999 qpair failed and we were unable to recover it. 00:36:16.999 [2024-07-26 16:41:36.486910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.999 [2024-07-26 16:41:36.486947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.999 qpair failed and we were unable to recover it. 00:36:16.999 [2024-07-26 16:41:36.487175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.999 [2024-07-26 16:41:36.487212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.999 qpair failed and we were unable to recover it. 00:36:16.999 [2024-07-26 16:41:36.487432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.999 [2024-07-26 16:41:36.487466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.999 qpair failed and we were unable to recover it. 
00:36:16.999 [2024-07-26 16:41:36.487642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.999 [2024-07-26 16:41:36.487679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.999 qpair failed and we were unable to recover it. 00:36:16.999 [2024-07-26 16:41:36.487907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.999 [2024-07-26 16:41:36.487943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.999 qpair failed and we were unable to recover it. 00:36:16.999 [2024-07-26 16:41:36.488153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.999 [2024-07-26 16:41:36.488187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.999 qpair failed and we were unable to recover it. 00:36:16.999 [2024-07-26 16:41:36.488347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.999 [2024-07-26 16:41:36.488384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.999 qpair failed and we were unable to recover it. 00:36:16.999 [2024-07-26 16:41:36.488567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.999 [2024-07-26 16:41:36.488604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.999 qpair failed and we were unable to recover it. 00:36:16.999 [2024-07-26 16:41:36.488769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.999 [2024-07-26 16:41:36.488802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.999 qpair failed and we were unable to recover it. 00:36:16.999 [2024-07-26 16:41:36.489002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.999 [2024-07-26 16:41:36.489039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:16.999 qpair failed and we were unable to recover it. 00:36:17.000 [2024-07-26 16:41:36.489281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.000 [2024-07-26 16:41:36.489318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.000 qpair failed and we were unable to recover it. 00:36:17.000 [2024-07-26 16:41:36.489503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.000 [2024-07-26 16:41:36.489535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.000 qpair failed and we were unable to recover it. 00:36:17.000 [2024-07-26 16:41:36.489688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.000 [2024-07-26 16:41:36.489722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.000 qpair failed and we were unable to recover it. 
00:36:17.000 [2024-07-26 16:41:36.489948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.000 [2024-07-26 16:41:36.489985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.000 qpair failed and we were unable to recover it. 00:36:17.000 [2024-07-26 16:41:36.490184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.000 [2024-07-26 16:41:36.490218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.000 qpair failed and we were unable to recover it. 00:36:17.000 [2024-07-26 16:41:36.490419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.000 [2024-07-26 16:41:36.490459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.000 qpair failed and we were unable to recover it. 00:36:17.000 [2024-07-26 16:41:36.490652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.000 [2024-07-26 16:41:36.490688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.000 qpair failed and we were unable to recover it. 00:36:17.000 [2024-07-26 16:41:36.490873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.000 [2024-07-26 16:41:36.490907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.000 qpair failed and we were unable to recover it. 00:36:17.000 [2024-07-26 16:41:36.491083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.000 [2024-07-26 16:41:36.491117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.000 qpair failed and we were unable to recover it. 00:36:17.000 [2024-07-26 16:41:36.491300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.000 [2024-07-26 16:41:36.491337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.000 qpair failed and we were unable to recover it. 00:36:17.000 [2024-07-26 16:41:36.491511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.000 [2024-07-26 16:41:36.491544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.000 qpair failed and we were unable to recover it. 00:36:17.000 [2024-07-26 16:41:36.491751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.000 [2024-07-26 16:41:36.491801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.000 qpair failed and we were unable to recover it. 00:36:17.000 [2024-07-26 16:41:36.491993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.000 [2024-07-26 16:41:36.492030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.000 qpair failed and we were unable to recover it. 
00:36:17.000 [2024-07-26 16:41:36.492262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.000 [2024-07-26 16:41:36.492295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.000 qpair failed and we were unable to recover it. 00:36:17.000 [2024-07-26 16:41:36.492495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.000 [2024-07-26 16:41:36.492532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.000 qpair failed and we were unable to recover it. 00:36:17.000 [2024-07-26 16:41:36.492734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.000 [2024-07-26 16:41:36.492768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.000 qpair failed and we were unable to recover it. 00:36:17.000 [2024-07-26 16:41:36.492920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.000 [2024-07-26 16:41:36.492952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.000 qpair failed and we were unable to recover it. 00:36:17.000 [2024-07-26 16:41:36.493100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.000 [2024-07-26 16:41:36.493165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.000 qpair failed and we were unable to recover it. 00:36:17.000 [2024-07-26 16:41:36.493329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.000 [2024-07-26 16:41:36.493366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.000 qpair failed and we were unable to recover it. 00:36:17.000 [2024-07-26 16:41:36.493574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.000 [2024-07-26 16:41:36.493607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.000 qpair failed and we were unable to recover it. 00:36:17.000 [2024-07-26 16:41:36.493838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.000 [2024-07-26 16:41:36.493874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.000 qpair failed and we were unable to recover it. 00:36:17.000 [2024-07-26 16:41:36.494077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.000 [2024-07-26 16:41:36.494114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.000 qpair failed and we were unable to recover it. 00:36:17.000 [2024-07-26 16:41:36.494316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.000 [2024-07-26 16:41:36.494349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.000 qpair failed and we were unable to recover it. 
00:36:17.000 [2024-07-26 16:41:36.494552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.000 [2024-07-26 16:41:36.494586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.000 qpair failed and we were unable to recover it. 00:36:17.000 [2024-07-26 16:41:36.494764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.000 [2024-07-26 16:41:36.494797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.000 qpair failed and we were unable to recover it. 00:36:17.000 [2024-07-26 16:41:36.494999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.000 [2024-07-26 16:41:36.495032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.000 qpair failed and we were unable to recover it. 00:36:17.000 [2024-07-26 16:41:36.495270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.000 [2024-07-26 16:41:36.495307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.000 qpair failed and we were unable to recover it. 00:36:17.000 [2024-07-26 16:41:36.495485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.000 [2024-07-26 16:41:36.495521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.000 qpair failed and we were unable to recover it. 00:36:17.000 [2024-07-26 16:41:36.495728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.000 [2024-07-26 16:41:36.495761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.000 qpair failed and we were unable to recover it. 00:36:17.000 [2024-07-26 16:41:36.495926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.000 [2024-07-26 16:41:36.495959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.000 qpair failed and we were unable to recover it. 00:36:17.000 [2024-07-26 16:41:36.496154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.000 [2024-07-26 16:41:36.496192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.000 qpair failed and we were unable to recover it. 00:36:17.000 [2024-07-26 16:41:36.496398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.000 [2024-07-26 16:41:36.496431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.000 qpair failed and we were unable to recover it. 00:36:17.000 [2024-07-26 16:41:36.496686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.000 [2024-07-26 16:41:36.496744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.000 qpair failed and we were unable to recover it. 
00:36:17.000 [2024-07-26 16:41:36.496983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.000 [2024-07-26 16:41:36.497023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.000 qpair failed and we were unable to recover it. 00:36:17.000 [2024-07-26 16:41:36.497215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.000 [2024-07-26 16:41:36.497250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.000 qpair failed and we were unable to recover it. 00:36:17.000 [2024-07-26 16:41:36.497453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.000 [2024-07-26 16:41:36.497504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.000 qpair failed and we were unable to recover it. 00:36:17.000 [2024-07-26 16:41:36.497710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.000 [2024-07-26 16:41:36.497748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.000 qpair failed and we were unable to recover it. 00:36:17.000 [2024-07-26 16:41:36.497944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.001 [2024-07-26 16:41:36.497990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.001 qpair failed and we were unable to recover it. 00:36:17.001 [2024-07-26 16:41:36.498235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.001 [2024-07-26 16:41:36.498271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.001 qpair failed and we were unable to recover it. 00:36:17.001 [2024-07-26 16:41:36.498443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.001 [2024-07-26 16:41:36.498476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.001 qpair failed and we were unable to recover it. 00:36:17.001 [2024-07-26 16:41:36.498698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.001 [2024-07-26 16:41:36.498732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.001 qpair failed and we were unable to recover it. 00:36:17.001 [2024-07-26 16:41:36.498989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.001 [2024-07-26 16:41:36.499023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.001 qpair failed and we were unable to recover it. 00:36:17.001 [2024-07-26 16:41:36.499220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.001 [2024-07-26 16:41:36.499253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.001 qpair failed and we were unable to recover it. 
00:36:17.001 [2024-07-26 16:41:36.499429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.001 [2024-07-26 16:41:36.499462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.001 qpair failed and we were unable to recover it. 00:36:17.001 [2024-07-26 16:41:36.499744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.001 [2024-07-26 16:41:36.499801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.001 qpair failed and we were unable to recover it. 00:36:17.001 [2024-07-26 16:41:36.500034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.001 [2024-07-26 16:41:36.500085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.001 qpair failed and we were unable to recover it. 00:36:17.001 [2024-07-26 16:41:36.500261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.001 [2024-07-26 16:41:36.500294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.001 qpair failed and we were unable to recover it. 00:36:17.001 [2024-07-26 16:41:36.500496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.001 [2024-07-26 16:41:36.500546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.001 qpair failed and we were unable to recover it. 00:36:17.001 [2024-07-26 16:41:36.500700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.001 [2024-07-26 16:41:36.500737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.001 qpair failed and we were unable to recover it. 00:36:17.001 [2024-07-26 16:41:36.500961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.001 [2024-07-26 16:41:36.500994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.001 qpair failed and we were unable to recover it. 00:36:17.001 [2024-07-26 16:41:36.501194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.001 [2024-07-26 16:41:36.501232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.001 qpair failed and we were unable to recover it. 00:36:17.001 [2024-07-26 16:41:36.501449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.001 [2024-07-26 16:41:36.501485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.001 qpair failed and we were unable to recover it. 00:36:17.001 [2024-07-26 16:41:36.501673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.001 [2024-07-26 16:41:36.501706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.001 qpair failed and we were unable to recover it. 
00:36:17.001 [2024-07-26 16:41:36.501896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.001 [2024-07-26 16:41:36.501933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.001 qpair failed and we were unable to recover it. 00:36:17.001 [2024-07-26 16:41:36.502138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.001 [2024-07-26 16:41:36.502175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.001 qpair failed and we were unable to recover it. 00:36:17.001 [2024-07-26 16:41:36.502373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.001 [2024-07-26 16:41:36.502407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.001 qpair failed and we were unable to recover it. 00:36:17.001 [2024-07-26 16:41:36.502578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.001 [2024-07-26 16:41:36.502616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.001 qpair failed and we were unable to recover it. 00:36:17.001 [2024-07-26 16:41:36.502795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.001 [2024-07-26 16:41:36.502827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.001 qpair failed and we were unable to recover it. 00:36:17.001 [2024-07-26 16:41:36.503029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.001 [2024-07-26 16:41:36.503070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.001 qpair failed and we were unable to recover it. 00:36:17.001 [2024-07-26 16:41:36.503281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.001 [2024-07-26 16:41:36.503318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.001 qpair failed and we were unable to recover it. 00:36:17.001 [2024-07-26 16:41:36.503513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.001 [2024-07-26 16:41:36.503551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.001 qpair failed and we were unable to recover it. 00:36:17.001 [2024-07-26 16:41:36.503729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.001 [2024-07-26 16:41:36.503763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.001 qpair failed and we were unable to recover it. 00:36:17.001 [2024-07-26 16:41:36.503944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.001 [2024-07-26 16:41:36.503977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.001 qpair failed and we were unable to recover it. 
00:36:17.001 [2024-07-26 16:41:36.504175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.001 [2024-07-26 16:41:36.504213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.001 qpair failed and we were unable to recover it. 00:36:17.001 [2024-07-26 16:41:36.504422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.001 [2024-07-26 16:41:36.504455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.001 qpair failed and we were unable to recover it. 00:36:17.001 [2024-07-26 16:41:36.504771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.001 [2024-07-26 16:41:36.504836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.001 qpair failed and we were unable to recover it. 00:36:17.001 [2024-07-26 16:41:36.505033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.001 [2024-07-26 16:41:36.505082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.001 qpair failed and we were unable to recover it. 00:36:17.001 [2024-07-26 16:41:36.505292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.001 [2024-07-26 16:41:36.505325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.001 qpair failed and we were unable to recover it. 00:36:17.002 [2024-07-26 16:41:36.505533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.002 [2024-07-26 16:41:36.505566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.002 qpair failed and we were unable to recover it. 00:36:17.002 [2024-07-26 16:41:36.505744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.002 [2024-07-26 16:41:36.505777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.002 qpair failed and we were unable to recover it. 00:36:17.002 [2024-07-26 16:41:36.505970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.002 [2024-07-26 16:41:36.506008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.002 qpair failed and we were unable to recover it. 00:36:17.002 [2024-07-26 16:41:36.506248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.002 [2024-07-26 16:41:36.506298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.002 qpair failed and we were unable to recover it. 00:36:17.002 [2024-07-26 16:41:36.506552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.002 [2024-07-26 16:41:36.506606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.002 qpair failed and we were unable to recover it. 
00:36:17.002 [2024-07-26 16:41:36.506816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.002 [2024-07-26 16:41:36.506852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.002 qpair failed and we were unable to recover it. 00:36:17.002 [2024-07-26 16:41:36.507082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.002 [2024-07-26 16:41:36.507134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.002 qpair failed and we were unable to recover it. 00:36:17.002 [2024-07-26 16:41:36.507367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.002 [2024-07-26 16:41:36.507405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.002 qpair failed and we were unable to recover it. 00:36:17.002 [2024-07-26 16:41:36.507637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.002 [2024-07-26 16:41:36.507670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.002 qpair failed and we were unable to recover it. 00:36:17.002 [2024-07-26 16:41:36.507955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.002 [2024-07-26 16:41:36.508014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.002 qpair failed and we were unable to recover it. 00:36:17.002 [2024-07-26 16:41:36.508252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.002 [2024-07-26 16:41:36.508286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.002 qpair failed and we were unable to recover it. 00:36:17.002 [2024-07-26 16:41:36.508495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.002 [2024-07-26 16:41:36.508529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.002 qpair failed and we were unable to recover it. 00:36:17.002 [2024-07-26 16:41:36.508746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.002 [2024-07-26 16:41:36.508804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.002 qpair failed and we were unable to recover it. 00:36:17.002 [2024-07-26 16:41:36.508995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.002 [2024-07-26 16:41:36.509031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.002 qpair failed and we were unable to recover it. 00:36:17.002 [2024-07-26 16:41:36.509237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.002 [2024-07-26 16:41:36.509270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.002 qpair failed and we were unable to recover it. 
00:36:17.002 [2024-07-26 16:41:36.509446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.002 [2024-07-26 16:41:36.509494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.002 qpair failed and we were unable to recover it. 00:36:17.002 [2024-07-26 16:41:36.509669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.002 [2024-07-26 16:41:36.509704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.002 qpair failed and we were unable to recover it. 00:36:17.002 [2024-07-26 16:41:36.509891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.002 [2024-07-26 16:41:36.509926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.002 qpair failed and we were unable to recover it. 00:36:17.002 [2024-07-26 16:41:36.510086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.002 [2024-07-26 16:41:36.510121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.002 qpair failed and we were unable to recover it. 00:36:17.002 [2024-07-26 16:41:36.510319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.002 [2024-07-26 16:41:36.510388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.002 qpair failed and we were unable to recover it. 00:36:17.002 [2024-07-26 16:41:36.510631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.002 [2024-07-26 16:41:36.510668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.002 qpair failed and we were unable to recover it. 00:36:17.002 [2024-07-26 16:41:36.510901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.002 [2024-07-26 16:41:36.510939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.002 qpair failed and we were unable to recover it. 00:36:17.002 [2024-07-26 16:41:36.511129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.002 [2024-07-26 16:41:36.511164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.002 qpair failed and we were unable to recover it. 00:36:17.002 [2024-07-26 16:41:36.511325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.002 [2024-07-26 16:41:36.511360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.002 qpair failed and we were unable to recover it. 00:36:17.002 [2024-07-26 16:41:36.511601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.002 [2024-07-26 16:41:36.511655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.002 qpair failed and we were unable to recover it. 
00:36:17.002 [2024-07-26 16:41:36.511870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.002 [2024-07-26 16:41:36.511911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.002 qpair failed and we were unable to recover it. 00:36:17.002 [2024-07-26 16:41:36.512096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.002 [2024-07-26 16:41:36.512132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.002 qpair failed and we were unable to recover it. 00:36:17.002 [2024-07-26 16:41:36.512337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.002 [2024-07-26 16:41:36.512386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.002 qpair failed and we were unable to recover it. 00:36:17.002 [2024-07-26 16:41:36.512621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.002 [2024-07-26 16:41:36.512679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.002 qpair failed and we were unable to recover it. 00:36:17.002 [2024-07-26 16:41:36.512874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.002 [2024-07-26 16:41:36.512909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.002 qpair failed and we were unable to recover it. 00:36:17.002 [2024-07-26 16:41:36.513139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.002 [2024-07-26 16:41:36.513176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.002 qpair failed and we were unable to recover it. 00:36:17.002 [2024-07-26 16:41:36.513354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.002 [2024-07-26 16:41:36.513403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.002 qpair failed and we were unable to recover it. 00:36:17.002 [2024-07-26 16:41:36.513614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.002 [2024-07-26 16:41:36.513650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.002 qpair failed and we were unable to recover it. 00:36:17.002 [2024-07-26 16:41:36.514008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.002 [2024-07-26 16:41:36.514082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.002 qpair failed and we were unable to recover it. 00:36:17.002 [2024-07-26 16:41:36.514280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.002 [2024-07-26 16:41:36.514313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.002 qpair failed and we were unable to recover it. 
00:36:17.002 [2024-07-26 16:41:36.514508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.002 [2024-07-26 16:41:36.514541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.002 qpair failed and we were unable to recover it. 00:36:17.002 [2024-07-26 16:41:36.514777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.002 [2024-07-26 16:41:36.514835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.002 qpair failed and we were unable to recover it. 00:36:17.003 [2024-07-26 16:41:36.515035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.003 [2024-07-26 16:41:36.515077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.003 qpair failed and we were unable to recover it. 00:36:17.003 [2024-07-26 16:41:36.515280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.003 [2024-07-26 16:41:36.515313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.003 qpair failed and we were unable to recover it. 00:36:17.003 [2024-07-26 16:41:36.515541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.003 [2024-07-26 16:41:36.515578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.003 qpair failed and we were unable to recover it. 00:36:17.003 [2024-07-26 16:41:36.515905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.003 [2024-07-26 16:41:36.515962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.003 qpair failed and we were unable to recover it. 00:36:17.003 [2024-07-26 16:41:36.516195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.003 [2024-07-26 16:41:36.516229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.003 qpair failed and we were unable to recover it. 00:36:17.003 [2024-07-26 16:41:36.516395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.003 [2024-07-26 16:41:36.516432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.003 qpair failed and we were unable to recover it. 00:36:17.003 [2024-07-26 16:41:36.516629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.003 [2024-07-26 16:41:36.516678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.003 qpair failed and we were unable to recover it. 00:36:17.003 [2024-07-26 16:41:36.516900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.003 [2024-07-26 16:41:36.516938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.003 qpair failed and we were unable to recover it. 
00:36:17.003 [2024-07-26 16:41:36.517141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.003 [2024-07-26 16:41:36.517191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.003 qpair failed and we were unable to recover it. 00:36:17.003 [2024-07-26 16:41:36.517364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.003 [2024-07-26 16:41:36.517416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.003 qpair failed and we were unable to recover it. 00:36:17.003 [2024-07-26 16:41:36.517623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.003 [2024-07-26 16:41:36.517655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.003 qpair failed and we were unable to recover it. 00:36:17.003 [2024-07-26 16:41:36.517935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.003 [2024-07-26 16:41:36.518008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.003 qpair failed and we were unable to recover it. 00:36:17.003 [2024-07-26 16:41:36.518222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.003 [2024-07-26 16:41:36.518255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.003 qpair failed and we were unable to recover it. 00:36:17.003 [2024-07-26 16:41:36.518455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.003 [2024-07-26 16:41:36.518487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.003 qpair failed and we were unable to recover it. 00:36:17.003 [2024-07-26 16:41:36.518827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.003 [2024-07-26 16:41:36.518885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.003 qpair failed and we were unable to recover it. 00:36:17.003 [2024-07-26 16:41:36.519123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.003 [2024-07-26 16:41:36.519156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.003 qpair failed and we were unable to recover it. 00:36:17.003 [2024-07-26 16:41:36.519342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.003 [2024-07-26 16:41:36.519376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.003 qpair failed and we were unable to recover it. 00:36:17.003 [2024-07-26 16:41:36.519579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.003 [2024-07-26 16:41:36.519617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.003 qpair failed and we were unable to recover it. 
00:36:17.003 [2024-07-26 16:41:36.519930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.003 [2024-07-26 16:41:36.519992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.003 qpair failed and we were unable to recover it. 00:36:17.003 [2024-07-26 16:41:36.520225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.003 [2024-07-26 16:41:36.520260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.003 qpair failed and we were unable to recover it. 00:36:17.003 [2024-07-26 16:41:36.520468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.003 [2024-07-26 16:41:36.520505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.003 qpair failed and we were unable to recover it. 00:36:17.003 [2024-07-26 16:41:36.520819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.003 [2024-07-26 16:41:36.520874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.003 qpair failed and we were unable to recover it. 00:36:17.003 [2024-07-26 16:41:36.521052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.003 [2024-07-26 16:41:36.521094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.003 qpair failed and we were unable to recover it. 00:36:17.003 [2024-07-26 16:41:36.521271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.003 [2024-07-26 16:41:36.521305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.003 qpair failed and we were unable to recover it. 00:36:17.003 [2024-07-26 16:41:36.521517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.003 [2024-07-26 16:41:36.521554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.003 qpair failed and we were unable to recover it. 00:36:17.003 [2024-07-26 16:41:36.521736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.003 [2024-07-26 16:41:36.521769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.003 qpair failed and we were unable to recover it. 00:36:17.003 [2024-07-26 16:41:36.521932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.003 [2024-07-26 16:41:36.521969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.003 qpair failed and we were unable to recover it. 00:36:17.003 [2024-07-26 16:41:36.522198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.003 [2024-07-26 16:41:36.522232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.003 qpair failed and we were unable to recover it. 
00:36:17.003 [2024-07-26 16:41:36.522418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.003 [2024-07-26 16:41:36.522451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.003 qpair failed and we were unable to recover it. 00:36:17.003 [2024-07-26 16:41:36.522727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.003 [2024-07-26 16:41:36.522801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.003 qpair failed and we were unable to recover it. 00:36:17.003 [2024-07-26 16:41:36.523017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.003 [2024-07-26 16:41:36.523054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.003 qpair failed and we were unable to recover it. 00:36:17.003 [2024-07-26 16:41:36.523233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.003 [2024-07-26 16:41:36.523267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.003 qpair failed and we were unable to recover it. 00:36:17.003 [2024-07-26 16:41:36.523512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.003 [2024-07-26 16:41:36.523566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.003 qpair failed and we were unable to recover it. 00:36:17.003 [2024-07-26 16:41:36.523901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.003 [2024-07-26 16:41:36.523961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.003 qpair failed and we were unable to recover it. 00:36:17.003 [2024-07-26 16:41:36.524152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.003 [2024-07-26 16:41:36.524187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.003 qpair failed and we were unable to recover it. 00:36:17.003 [2024-07-26 16:41:36.524366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.003 [2024-07-26 16:41:36.524399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.003 qpair failed and we were unable to recover it. 00:36:17.003 [2024-07-26 16:41:36.524676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.003 [2024-07-26 16:41:36.524709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.003 qpair failed and we were unable to recover it. 00:36:17.004 [2024-07-26 16:41:36.524885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.004 [2024-07-26 16:41:36.524918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.004 qpair failed and we were unable to recover it. 
00:36:17.004 [2024-07-26 16:41:36.525099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.004 [2024-07-26 16:41:36.525134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.004 qpair failed and we were unable to recover it. 00:36:17.004 [2024-07-26 16:41:36.525384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.004 [2024-07-26 16:41:36.525438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.004 qpair failed and we were unable to recover it. 00:36:17.004 [2024-07-26 16:41:36.525676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.004 [2024-07-26 16:41:36.525712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.004 qpair failed and we were unable to recover it. 00:36:17.004 [2024-07-26 16:41:36.525881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.004 [2024-07-26 16:41:36.525920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.004 qpair failed and we were unable to recover it. 00:36:17.004 [2024-07-26 16:41:36.526134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.004 [2024-07-26 16:41:36.526169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.004 qpair failed and we were unable to recover it. 00:36:17.004 [2024-07-26 16:41:36.526377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.004 [2024-07-26 16:41:36.526411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.004 qpair failed and we were unable to recover it. 00:36:17.004 [2024-07-26 16:41:36.526602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.004 [2024-07-26 16:41:36.526640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.004 qpair failed and we were unable to recover it. 00:36:17.004 [2024-07-26 16:41:36.526857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.004 [2024-07-26 16:41:36.526891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.004 qpair failed and we were unable to recover it. 00:36:17.004 [2024-07-26 16:41:36.527096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.004 [2024-07-26 16:41:36.527130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.004 qpair failed and we were unable to recover it. 00:36:17.004 [2024-07-26 16:41:36.527306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.004 [2024-07-26 16:41:36.527363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.004 qpair failed and we were unable to recover it. 
00:36:17.004 [2024-07-26 16:41:36.527558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.004 [2024-07-26 16:41:36.527596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.004 qpair failed and we were unable to recover it. 00:36:17.004 [2024-07-26 16:41:36.527790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.004 [2024-07-26 16:41:36.527823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.004 qpair failed and we were unable to recover it. 00:36:17.004 [2024-07-26 16:41:36.528057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.004 [2024-07-26 16:41:36.528134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.004 qpair failed and we were unable to recover it. 00:36:17.004 [2024-07-26 16:41:36.528323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.004 [2024-07-26 16:41:36.528374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.004 qpair failed and we were unable to recover it. 00:36:17.004 [2024-07-26 16:41:36.528552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.004 [2024-07-26 16:41:36.528587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.004 qpair failed and we were unable to recover it. 00:36:17.004 [2024-07-26 16:41:36.528835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.004 [2024-07-26 16:41:36.528892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.004 qpair failed and we were unable to recover it. 00:36:17.004 [2024-07-26 16:41:36.529090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.004 [2024-07-26 16:41:36.529143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.004 qpair failed and we were unable to recover it. 00:36:17.004 [2024-07-26 16:41:36.529295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.004 [2024-07-26 16:41:36.529328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.004 qpair failed and we were unable to recover it. 00:36:17.004 [2024-07-26 16:41:36.529541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.004 [2024-07-26 16:41:36.529598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.004 qpair failed and we were unable to recover it. 00:36:17.004 [2024-07-26 16:41:36.529809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.004 [2024-07-26 16:41:36.529866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.004 qpair failed and we were unable to recover it. 
00:36:17.004 [2024-07-26 16:41:36.530074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.004 [2024-07-26 16:41:36.530108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.004 qpair failed and we were unable to recover it. 00:36:17.004 [2024-07-26 16:41:36.530288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.004 [2024-07-26 16:41:36.530321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.004 qpair failed and we were unable to recover it. 00:36:17.004 [2024-07-26 16:41:36.530521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.004 [2024-07-26 16:41:36.530558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.004 qpair failed and we were unable to recover it. 00:36:17.004 [2024-07-26 16:41:36.530766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.004 [2024-07-26 16:41:36.530810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.004 qpair failed and we were unable to recover it. 00:36:17.004 [2024-07-26 16:41:36.531075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.004 [2024-07-26 16:41:36.531143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.004 qpair failed and we were unable to recover it. 00:36:17.004 [2024-07-26 16:41:36.531346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.004 [2024-07-26 16:41:36.531383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.004 qpair failed and we were unable to recover it. 00:36:17.004 [2024-07-26 16:41:36.531587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.004 [2024-07-26 16:41:36.531622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.004 qpair failed and we were unable to recover it. 00:36:17.004 [2024-07-26 16:41:36.531880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.004 [2024-07-26 16:41:36.531914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.004 qpair failed and we were unable to recover it. 00:36:17.004 [2024-07-26 16:41:36.532100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.004 [2024-07-26 16:41:36.532135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.004 qpair failed and we were unable to recover it. 00:36:17.004 [2024-07-26 16:41:36.532313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.004 [2024-07-26 16:41:36.532347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.004 qpair failed and we were unable to recover it. 
00:36:17.004 [2024-07-26 16:41:36.532695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.004 [2024-07-26 16:41:36.532753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.004 qpair failed and we were unable to recover it. 00:36:17.004 [2024-07-26 16:41:36.533084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.004 [2024-07-26 16:41:36.533154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.004 qpair failed and we were unable to recover it. 00:36:17.004 [2024-07-26 16:41:36.533332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.004 [2024-07-26 16:41:36.533365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.004 qpair failed and we were unable to recover it. 00:36:17.004 [2024-07-26 16:41:36.533541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.004 [2024-07-26 16:41:36.533574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.004 qpair failed and we were unable to recover it. 00:36:17.004 [2024-07-26 16:41:36.533731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.004 [2024-07-26 16:41:36.533764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.004 qpair failed and we were unable to recover it. 00:36:17.004 [2024-07-26 16:41:36.533942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.004 [2024-07-26 16:41:36.533975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.004 qpair failed and we were unable to recover it. 00:36:17.004 [2024-07-26 16:41:36.534182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.004 [2024-07-26 16:41:36.534217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.005 qpair failed and we were unable to recover it. 00:36:17.005 [2024-07-26 16:41:36.534447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.005 [2024-07-26 16:41:36.534485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.005 qpair failed and we were unable to recover it. 00:36:17.005 [2024-07-26 16:41:36.534695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.005 [2024-07-26 16:41:36.534729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.005 qpair failed and we were unable to recover it. 00:36:17.005 [2024-07-26 16:41:36.534983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.005 [2024-07-26 16:41:36.535020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.005 qpair failed and we were unable to recover it. 
00:36:17.005 [2024-07-26 16:41:36.535234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.005 [2024-07-26 16:41:36.535268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.005 qpair failed and we were unable to recover it. 00:36:17.005 [2024-07-26 16:41:36.535443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.005 [2024-07-26 16:41:36.535477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.005 qpair failed and we were unable to recover it. 00:36:17.005 [2024-07-26 16:41:36.535657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.005 [2024-07-26 16:41:36.535690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.005 qpair failed and we were unable to recover it. 00:36:17.005 [2024-07-26 16:41:36.535910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.005 [2024-07-26 16:41:36.535948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.005 qpair failed and we were unable to recover it. 00:36:17.005 [2024-07-26 16:41:36.536185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.005 [2024-07-26 16:41:36.536220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.005 qpair failed and we were unable to recover it. 00:36:17.005 [2024-07-26 16:41:36.536391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.005 [2024-07-26 16:41:36.536429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.005 qpair failed and we were unable to recover it. 00:36:17.005 [2024-07-26 16:41:36.536624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.005 [2024-07-26 16:41:36.536662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.005 qpair failed and we were unable to recover it. 00:36:17.005 [2024-07-26 16:41:36.536859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.005 [2024-07-26 16:41:36.536894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.005 qpair failed and we were unable to recover it. 00:36:17.005 [2024-07-26 16:41:36.537124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.005 [2024-07-26 16:41:36.537159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.005 qpair failed and we were unable to recover it. 00:36:17.005 [2024-07-26 16:41:36.537340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.005 [2024-07-26 16:41:36.537398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.005 qpair failed and we were unable to recover it. 
00:36:17.005 [2024-07-26 16:41:36.537636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.005 [2024-07-26 16:41:36.537671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.005 qpair failed and we were unable to recover it. 00:36:17.005 [2024-07-26 16:41:36.537857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.005 [2024-07-26 16:41:36.537894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.005 qpair failed and we were unable to recover it. 00:36:17.005 [2024-07-26 16:41:36.538112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.005 [2024-07-26 16:41:36.538151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.005 qpair failed and we were unable to recover it. 00:36:17.005 [2024-07-26 16:41:36.538318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.005 [2024-07-26 16:41:36.538352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.005 qpair failed and we were unable to recover it. 00:36:17.005 [2024-07-26 16:41:36.538584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.005 [2024-07-26 16:41:36.538621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.005 qpair failed and we were unable to recover it. 00:36:17.005 [2024-07-26 16:41:36.538814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.005 [2024-07-26 16:41:36.538852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.005 qpair failed and we were unable to recover it. 00:36:17.005 [2024-07-26 16:41:36.539032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.005 [2024-07-26 16:41:36.539079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.005 qpair failed and we were unable to recover it. 00:36:17.005 [2024-07-26 16:41:36.539258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.005 [2024-07-26 16:41:36.539291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.005 qpair failed and we were unable to recover it. 00:36:17.005 [2024-07-26 16:41:36.539475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.005 [2024-07-26 16:41:36.539509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.005 qpair failed and we were unable to recover it. 00:36:17.005 [2024-07-26 16:41:36.539751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.005 [2024-07-26 16:41:36.539786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.005 qpair failed and we were unable to recover it. 
00:36:17.005 [2024-07-26 16:41:36.539999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.005 [2024-07-26 16:41:36.540038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.005 qpair failed and we were unable to recover it. 00:36:17.005 [2024-07-26 16:41:36.540241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.005 [2024-07-26 16:41:36.540280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.005 qpair failed and we were unable to recover it. 00:36:17.005 [2024-07-26 16:41:36.540502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.005 [2024-07-26 16:41:36.540536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.005 qpair failed and we were unable to recover it. 00:36:17.005 [2024-07-26 16:41:36.540723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.005 [2024-07-26 16:41:36.540757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.005 qpair failed and we were unable to recover it. 00:36:17.005 [2024-07-26 16:41:36.540949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.005 [2024-07-26 16:41:36.540988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.005 qpair failed and we were unable to recover it. 00:36:17.005 [2024-07-26 16:41:36.541204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.005 [2024-07-26 16:41:36.541240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.005 qpair failed and we were unable to recover it. 00:36:17.005 [2024-07-26 16:41:36.541428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.005 [2024-07-26 16:41:36.541463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.005 qpair failed and we were unable to recover it. 00:36:17.005 [2024-07-26 16:41:36.541651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.005 [2024-07-26 16:41:36.541688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.005 qpair failed and we were unable to recover it. 00:36:17.005 [2024-07-26 16:41:36.541862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.005 [2024-07-26 16:41:36.541897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.005 qpair failed and we were unable to recover it. 00:36:17.005 [2024-07-26 16:41:36.542074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.005 [2024-07-26 16:41:36.542108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.005 qpair failed and we were unable to recover it. 
00:36:17.005 [2024-07-26 16:41:36.542314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:17.005 [2024-07-26 16:41:36.542352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 
00:36:17.005 qpair failed and we were unable to recover it. 
[... the same three-entry sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats through 16:41:36.546 ...] 
00:36:17.006 [2024-07-26 16:41:36.546432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:17.006 [2024-07-26 16:41:36.546485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 
00:36:17.006 qpair failed and we were unable to recover it. 
[... the same three-entry sequence repeats for tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 through 16:41:36.592, each attempt ending with "qpair failed and we were unable to recover it." ...] 
00:36:17.011 [2024-07-26 16:41:36.592537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.011 [2024-07-26 16:41:36.592574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.011 qpair failed and we were unable to recover it. 00:36:17.011 [2024-07-26 16:41:36.592768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.011 [2024-07-26 16:41:36.592801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.011 qpair failed and we were unable to recover it. 00:36:17.011 [2024-07-26 16:41:36.592988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.011 [2024-07-26 16:41:36.593025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.011 qpair failed and we were unable to recover it. 00:36:17.011 [2024-07-26 16:41:36.593231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.011 [2024-07-26 16:41:36.593265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.011 qpair failed and we were unable to recover it. 00:36:17.011 [2024-07-26 16:41:36.593461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.011 [2024-07-26 16:41:36.593494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.011 qpair failed and we were unable to recover it. 00:36:17.011 [2024-07-26 16:41:36.593695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.011 [2024-07-26 16:41:36.593732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.011 qpair failed and we were unable to recover it. 00:36:17.011 [2024-07-26 16:41:36.593887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.011 [2024-07-26 16:41:36.593924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.011 qpair failed and we were unable to recover it. 00:36:17.011 [2024-07-26 16:41:36.594147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.012 [2024-07-26 16:41:36.594181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.012 qpair failed and we were unable to recover it. 00:36:17.012 [2024-07-26 16:41:36.594408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.012 [2024-07-26 16:41:36.594445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.012 qpair failed and we were unable to recover it. 00:36:17.012 [2024-07-26 16:41:36.594640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.012 [2024-07-26 16:41:36.594677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.012 qpair failed and we were unable to recover it. 
00:36:17.012 [2024-07-26 16:41:36.594855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.012 [2024-07-26 16:41:36.594889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.012 qpair failed and we were unable to recover it. 00:36:17.012 [2024-07-26 16:41:36.595092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.012 [2024-07-26 16:41:36.595130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.012 qpair failed and we were unable to recover it. 00:36:17.012 [2024-07-26 16:41:36.595355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.012 [2024-07-26 16:41:36.595392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.012 qpair failed and we were unable to recover it. 00:36:17.012 [2024-07-26 16:41:36.595618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.012 [2024-07-26 16:41:36.595651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.012 qpair failed and we were unable to recover it. 00:36:17.012 [2024-07-26 16:41:36.595846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.012 [2024-07-26 16:41:36.595883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.012 qpair failed and we were unable to recover it. 00:36:17.012 [2024-07-26 16:41:36.596079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.012 [2024-07-26 16:41:36.596116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.012 qpair failed and we were unable to recover it. 00:36:17.012 [2024-07-26 16:41:36.596313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.012 [2024-07-26 16:41:36.596347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.012 qpair failed and we were unable to recover it. 00:36:17.012 [2024-07-26 16:41:36.596566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.012 [2024-07-26 16:41:36.596603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.012 qpair failed and we were unable to recover it. 00:36:17.012 [2024-07-26 16:41:36.596784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.012 [2024-07-26 16:41:36.596820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.012 qpair failed and we were unable to recover it. 00:36:17.012 [2024-07-26 16:41:36.597012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.012 [2024-07-26 16:41:36.597045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.012 qpair failed and we were unable to recover it. 
00:36:17.012 [2024-07-26 16:41:36.597278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.012 [2024-07-26 16:41:36.597315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.012 qpair failed and we were unable to recover it. 00:36:17.012 [2024-07-26 16:41:36.597534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.012 [2024-07-26 16:41:36.597571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.012 qpair failed and we were unable to recover it. 00:36:17.012 [2024-07-26 16:41:36.597772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.012 [2024-07-26 16:41:36.597806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.012 qpair failed and we were unable to recover it. 00:36:17.012 [2024-07-26 16:41:36.598028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.012 [2024-07-26 16:41:36.598082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.012 qpair failed and we were unable to recover it. 00:36:17.012 [2024-07-26 16:41:36.598328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.012 [2024-07-26 16:41:36.598362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.012 qpair failed and we were unable to recover it. 00:36:17.012 [2024-07-26 16:41:36.598538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.012 [2024-07-26 16:41:36.598571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.012 qpair failed and we were unable to recover it. 00:36:17.012 [2024-07-26 16:41:36.598787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.012 [2024-07-26 16:41:36.598824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.012 qpair failed and we were unable to recover it. 00:36:17.012 [2024-07-26 16:41:36.599020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.012 [2024-07-26 16:41:36.599054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.012 qpair failed and we were unable to recover it. 00:36:17.012 [2024-07-26 16:41:36.599240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.012 [2024-07-26 16:41:36.599273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.012 qpair failed and we were unable to recover it. 00:36:17.012 [2024-07-26 16:41:36.599448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.012 [2024-07-26 16:41:36.599481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.012 qpair failed and we were unable to recover it. 
00:36:17.012 [2024-07-26 16:41:36.599709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.012 [2024-07-26 16:41:36.599746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.012 qpair failed and we were unable to recover it. 00:36:17.012 [2024-07-26 16:41:36.599944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.012 [2024-07-26 16:41:36.599977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.012 qpair failed and we were unable to recover it. 00:36:17.012 [2024-07-26 16:41:36.600132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.012 [2024-07-26 16:41:36.600166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.012 qpair failed and we were unable to recover it. 00:36:17.012 [2024-07-26 16:41:36.600368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.012 [2024-07-26 16:41:36.600405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.012 qpair failed and we were unable to recover it. 00:36:17.012 [2024-07-26 16:41:36.600629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.012 [2024-07-26 16:41:36.600662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.012 qpair failed and we were unable to recover it. 00:36:17.012 [2024-07-26 16:41:36.600854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.012 [2024-07-26 16:41:36.600892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.012 qpair failed and we were unable to recover it. 00:36:17.012 [2024-07-26 16:41:36.601131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.012 [2024-07-26 16:41:36.601168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.012 qpair failed and we were unable to recover it. 00:36:17.012 [2024-07-26 16:41:36.601363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.012 [2024-07-26 16:41:36.601396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.012 qpair failed and we were unable to recover it. 00:36:17.012 [2024-07-26 16:41:36.601588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.012 [2024-07-26 16:41:36.601625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.012 qpair failed and we were unable to recover it. 00:36:17.012 [2024-07-26 16:41:36.601786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.012 [2024-07-26 16:41:36.601822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.012 qpair failed and we were unable to recover it. 
00:36:17.012 [2024-07-26 16:41:36.602019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.012 [2024-07-26 16:41:36.602054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.012 qpair failed and we were unable to recover it. 00:36:17.012 [2024-07-26 16:41:36.602278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.012 [2024-07-26 16:41:36.602312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.012 qpair failed and we were unable to recover it. 00:36:17.012 [2024-07-26 16:41:36.602505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.012 [2024-07-26 16:41:36.602541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.012 qpair failed and we were unable to recover it. 00:36:17.012 [2024-07-26 16:41:36.602743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.012 [2024-07-26 16:41:36.602776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.012 qpair failed and we were unable to recover it. 00:36:17.012 [2024-07-26 16:41:36.602920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.012 [2024-07-26 16:41:36.602955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.012 qpair failed and we were unable to recover it. 00:36:17.013 [2024-07-26 16:41:36.603131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.013 [2024-07-26 16:41:36.603165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.013 qpair failed and we were unable to recover it. 00:36:17.013 [2024-07-26 16:41:36.603365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.013 [2024-07-26 16:41:36.603398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.013 qpair failed and we were unable to recover it. 00:36:17.013 [2024-07-26 16:41:36.603571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.013 [2024-07-26 16:41:36.603607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.013 qpair failed and we were unable to recover it. 00:36:17.013 [2024-07-26 16:41:36.603794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.013 [2024-07-26 16:41:36.603831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.013 qpair failed and we were unable to recover it. 00:36:17.013 [2024-07-26 16:41:36.603991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.013 [2024-07-26 16:41:36.604029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.013 qpair failed and we were unable to recover it. 
00:36:17.013 [2024-07-26 16:41:36.604242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.013 [2024-07-26 16:41:36.604279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.013 qpair failed and we were unable to recover it. 00:36:17.013 [2024-07-26 16:41:36.604470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.013 [2024-07-26 16:41:36.604508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.013 qpair failed and we were unable to recover it. 00:36:17.013 [2024-07-26 16:41:36.604707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.013 [2024-07-26 16:41:36.604741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.013 qpair failed and we were unable to recover it. 00:36:17.013 [2024-07-26 16:41:36.604920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.013 [2024-07-26 16:41:36.604955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.013 qpair failed and we were unable to recover it. 00:36:17.013 [2024-07-26 16:41:36.605194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.013 [2024-07-26 16:41:36.605232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.013 qpair failed and we were unable to recover it. 00:36:17.013 [2024-07-26 16:41:36.605479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.013 [2024-07-26 16:41:36.605513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.013 qpair failed and we were unable to recover it. 00:36:17.013 [2024-07-26 16:41:36.605745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.013 [2024-07-26 16:41:36.605793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.013 qpair failed and we were unable to recover it. 00:36:17.013 [2024-07-26 16:41:36.606009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.013 [2024-07-26 16:41:36.606047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.013 qpair failed and we were unable to recover it. 00:36:17.013 [2024-07-26 16:41:36.606278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.013 [2024-07-26 16:41:36.606311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.013 qpair failed and we were unable to recover it. 00:36:17.013 [2024-07-26 16:41:36.606535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.013 [2024-07-26 16:41:36.606572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.013 qpair failed and we were unable to recover it. 
00:36:17.013 [2024-07-26 16:41:36.606799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.013 [2024-07-26 16:41:36.606835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.013 qpair failed and we were unable to recover it. 00:36:17.013 [2024-07-26 16:41:36.607031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.013 [2024-07-26 16:41:36.607071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.013 qpair failed and we were unable to recover it. 00:36:17.013 [2024-07-26 16:41:36.607275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.013 [2024-07-26 16:41:36.607312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.013 qpair failed and we were unable to recover it. 00:36:17.013 [2024-07-26 16:41:36.607508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.013 [2024-07-26 16:41:36.607545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.013 qpair failed and we were unable to recover it. 00:36:17.013 [2024-07-26 16:41:36.607748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.013 [2024-07-26 16:41:36.607781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.013 qpair failed and we were unable to recover it. 00:36:17.013 [2024-07-26 16:41:36.608006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.013 [2024-07-26 16:41:36.608043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.013 qpair failed and we were unable to recover it. 00:36:17.013 [2024-07-26 16:41:36.608249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.013 [2024-07-26 16:41:36.608287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.013 qpair failed and we were unable to recover it. 00:36:17.013 [2024-07-26 16:41:36.608461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.013 [2024-07-26 16:41:36.608494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.013 qpair failed and we were unable to recover it. 00:36:17.013 [2024-07-26 16:41:36.608648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.013 [2024-07-26 16:41:36.608682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.013 qpair failed and we were unable to recover it. 00:36:17.013 [2024-07-26 16:41:36.608853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.013 [2024-07-26 16:41:36.608887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.013 qpair failed and we were unable to recover it. 
00:36:17.013 [2024-07-26 16:41:36.609057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.013 [2024-07-26 16:41:36.609100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.013 qpair failed and we were unable to recover it. 00:36:17.013 [2024-07-26 16:41:36.609324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.013 [2024-07-26 16:41:36.609362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.013 qpair failed and we were unable to recover it. 00:36:17.013 [2024-07-26 16:41:36.609579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.013 [2024-07-26 16:41:36.609616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.013 qpair failed and we were unable to recover it. 00:36:17.013 [2024-07-26 16:41:36.609779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.013 [2024-07-26 16:41:36.609813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.013 qpair failed and we were unable to recover it. 00:36:17.013 [2024-07-26 16:41:36.610006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.013 [2024-07-26 16:41:36.610043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.013 qpair failed and we were unable to recover it. 00:36:17.013 [2024-07-26 16:41:36.610221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.013 [2024-07-26 16:41:36.610260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.013 qpair failed and we were unable to recover it. 00:36:17.013 [2024-07-26 16:41:36.610468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.013 [2024-07-26 16:41:36.610502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.013 qpair failed and we were unable to recover it. 00:36:17.013 [2024-07-26 16:41:36.610701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.013 [2024-07-26 16:41:36.610738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.013 qpair failed and we were unable to recover it. 00:36:17.013 [2024-07-26 16:41:36.610932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.013 [2024-07-26 16:41:36.610969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.013 qpair failed and we were unable to recover it. 00:36:17.013 [2024-07-26 16:41:36.611138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.013 [2024-07-26 16:41:36.611183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.013 qpair failed and we were unable to recover it. 
00:36:17.013 [2024-07-26 16:41:36.611403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.013 [2024-07-26 16:41:36.611441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.013 qpair failed and we were unable to recover it. 00:36:17.013 [2024-07-26 16:41:36.611604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.013 [2024-07-26 16:41:36.611641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.013 qpair failed and we were unable to recover it. 00:36:17.013 [2024-07-26 16:41:36.611874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.014 [2024-07-26 16:41:36.611907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.014 qpair failed and we were unable to recover it. 00:36:17.014 [2024-07-26 16:41:36.612114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.014 [2024-07-26 16:41:36.612152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.014 qpair failed and we were unable to recover it. 00:36:17.014 [2024-07-26 16:41:36.612371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.014 [2024-07-26 16:41:36.612408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.014 qpair failed and we were unable to recover it. 00:36:17.014 [2024-07-26 16:41:36.612581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.014 [2024-07-26 16:41:36.612614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.014 qpair failed and we were unable to recover it. 00:36:17.014 [2024-07-26 16:41:36.612793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.014 [2024-07-26 16:41:36.612827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.014 qpair failed and we were unable to recover it. 00:36:17.014 [2024-07-26 16:41:36.613027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.014 [2024-07-26 16:41:36.613073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.014 qpair failed and we were unable to recover it. 00:36:17.014 [2024-07-26 16:41:36.613269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.014 [2024-07-26 16:41:36.613303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.014 qpair failed and we were unable to recover it. 00:36:17.014 [2024-07-26 16:41:36.613505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.014 [2024-07-26 16:41:36.613547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.014 qpair failed and we were unable to recover it. 
00:36:17.014 [2024-07-26 16:41:36.613749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.014 [2024-07-26 16:41:36.613786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.014 qpair failed and we were unable to recover it. 00:36:17.014 [2024-07-26 16:41:36.613967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.014 [2024-07-26 16:41:36.614001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.014 qpair failed and we were unable to recover it. 00:36:17.014 [2024-07-26 16:41:36.614211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.014 [2024-07-26 16:41:36.614249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.014 qpair failed and we were unable to recover it. 00:36:17.014 [2024-07-26 16:41:36.614453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.014 [2024-07-26 16:41:36.614489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.014 qpair failed and we were unable to recover it. 00:36:17.014 [2024-07-26 16:41:36.614715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.014 [2024-07-26 16:41:36.614748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.014 qpair failed and we were unable to recover it. 00:36:17.014 [2024-07-26 16:41:36.614974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.014 [2024-07-26 16:41:36.615011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.014 qpair failed and we were unable to recover it. 00:36:17.014 [2024-07-26 16:41:36.615196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.014 [2024-07-26 16:41:36.615231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.014 qpair failed and we were unable to recover it. 00:36:17.014 [2024-07-26 16:41:36.615413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.014 [2024-07-26 16:41:36.615447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.014 qpair failed and we were unable to recover it. 00:36:17.014 [2024-07-26 16:41:36.615645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.014 [2024-07-26 16:41:36.615682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.014 qpair failed and we were unable to recover it. 00:36:17.014 [2024-07-26 16:41:36.615869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.014 [2024-07-26 16:41:36.615906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.014 qpair failed and we were unable to recover it. 
00:36:17.014 [2024-07-26 16:41:36.616130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.014 [2024-07-26 16:41:36.616164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.014 qpair failed and we were unable to recover it. 00:36:17.014 [2024-07-26 16:41:36.616357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.014 [2024-07-26 16:41:36.616394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.014 qpair failed and we were unable to recover it. 00:36:17.014 [2024-07-26 16:41:36.616554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.014 [2024-07-26 16:41:36.616592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.014 qpair failed and we were unable to recover it. 00:36:17.014 [2024-07-26 16:41:36.616827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.014 [2024-07-26 16:41:36.616860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.014 qpair failed and we were unable to recover it. 00:36:17.014 [2024-07-26 16:41:36.617068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.014 [2024-07-26 16:41:36.617105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.014 qpair failed and we were unable to recover it. 00:36:17.014 [2024-07-26 16:41:36.617310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.014 [2024-07-26 16:41:36.617343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.014 qpair failed and we were unable to recover it. 00:36:17.014 [2024-07-26 16:41:36.617519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.014 [2024-07-26 16:41:36.617553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.014 qpair failed and we were unable to recover it. 00:36:17.014 [2024-07-26 16:41:36.617755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.014 [2024-07-26 16:41:36.617791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.014 qpair failed and we were unable to recover it. 00:36:17.014 [2024-07-26 16:41:36.617959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.014 [2024-07-26 16:41:36.617996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.014 qpair failed and we were unable to recover it. 00:36:17.014 [2024-07-26 16:41:36.618198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.014 [2024-07-26 16:41:36.618232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.014 qpair failed and we were unable to recover it. 
00:36:17.014 [2024-07-26 16:41:36.618432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.014 [2024-07-26 16:41:36.618468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.014 qpair failed and we were unable to recover it. 00:36:17.014 [2024-07-26 16:41:36.618689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.014 [2024-07-26 16:41:36.618726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.014 qpair failed and we were unable to recover it. 00:36:17.014 [2024-07-26 16:41:36.618949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.014 [2024-07-26 16:41:36.618983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.014 qpair failed and we were unable to recover it. 00:36:17.014 [2024-07-26 16:41:36.619201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.014 [2024-07-26 16:41:36.619238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.014 qpair failed and we were unable to recover it. 00:36:17.014 [2024-07-26 16:41:36.619439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.014 [2024-07-26 16:41:36.619474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.014 qpair failed and we were unable to recover it. 00:36:17.014 [2024-07-26 16:41:36.619673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.014 [2024-07-26 16:41:36.619707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.014 qpair failed and we were unable to recover it. 00:36:17.014 [2024-07-26 16:41:36.619918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.014 [2024-07-26 16:41:36.619952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.014 qpair failed and we were unable to recover it. 00:36:17.014 [2024-07-26 16:41:36.620119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.014 [2024-07-26 16:41:36.620157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.014 qpair failed and we were unable to recover it. 00:36:17.014 [2024-07-26 16:41:36.620350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.014 [2024-07-26 16:41:36.620383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.014 qpair failed and we were unable to recover it. 00:36:17.014 [2024-07-26 16:41:36.620591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.014 [2024-07-26 16:41:36.620629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.014 qpair failed and we were unable to recover it. 
00:36:17.015 [2024-07-26 16:41:36.620820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.015 [2024-07-26 16:41:36.620868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.015 qpair failed and we were unable to recover it. 00:36:17.015 [2024-07-26 16:41:36.621083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.015 [2024-07-26 16:41:36.621118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.015 qpair failed and we were unable to recover it. 00:36:17.015 [2024-07-26 16:41:36.621354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.015 [2024-07-26 16:41:36.621391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.015 qpair failed and we were unable to recover it. 00:36:17.015 [2024-07-26 16:41:36.621582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.015 [2024-07-26 16:41:36.621619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.015 qpair failed and we were unable to recover it. 00:36:17.015 [2024-07-26 16:41:36.621816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.015 [2024-07-26 16:41:36.621850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.015 qpair failed and we were unable to recover it. 00:36:17.015 [2024-07-26 16:41:36.622069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.015 [2024-07-26 16:41:36.622106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.015 qpair failed and we were unable to recover it. 00:36:17.015 [2024-07-26 16:41:36.622326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.015 [2024-07-26 16:41:36.622363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.015 qpair failed and we were unable to recover it. 00:36:17.015 [2024-07-26 16:41:36.622537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.015 [2024-07-26 16:41:36.622574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.015 qpair failed and we were unable to recover it. 00:36:17.015 [2024-07-26 16:41:36.622773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.015 [2024-07-26 16:41:36.622810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.015 qpair failed and we were unable to recover it. 00:36:17.015 [2024-07-26 16:41:36.623004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.015 [2024-07-26 16:41:36.623045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.015 qpair failed and we were unable to recover it. 
00:36:17.015 [2024-07-26 16:41:36.623284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.015 [2024-07-26 16:41:36.623316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:36:17.015 qpair failed and we were unable to recover it.
00:36:17.016 [2024-07-26 16:41:36.629321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.016 [2024-07-26 16:41:36.629370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:17.016 qpair failed and we were unable to recover it.
00:36:17.021 [2024-07-26 16:41:36.671601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.021 [2024-07-26 16:41:36.671642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:36:17.021 qpair failed and we were unable to recover it.
[... the same connect() failed, errno = 111 (ECONNREFUSED) / sock connection error sequence repeats continuously between 16:41:36.623284 and 16:41:36.671642 for tqpair=0x6150001ffe80 and tqpair=0x6150001f2780, addr=10.0.0.2, port=4420; every qpair failed and we were unable to recover it ...]
00:36:17.021 [2024-07-26 16:41:36.671797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.021 [2024-07-26 16:41:36.671831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.021 qpair failed and we were unable to recover it. 00:36:17.021 [2024-07-26 16:41:36.672068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.021 [2024-07-26 16:41:36.672105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.021 qpair failed and we were unable to recover it. 00:36:17.021 [2024-07-26 16:41:36.672305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.021 [2024-07-26 16:41:36.672344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.021 qpair failed and we were unable to recover it. 00:36:17.021 [2024-07-26 16:41:36.672526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.021 [2024-07-26 16:41:36.672560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.021 qpair failed and we were unable to recover it. 00:36:17.021 [2024-07-26 16:41:36.672733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.021 [2024-07-26 16:41:36.672765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.021 qpair failed and we were unable to recover it. 00:36:17.021 [2024-07-26 16:41:36.672980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.021 [2024-07-26 16:41:36.673022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.021 qpair failed and we were unable to recover it. 00:36:17.021 [2024-07-26 16:41:36.673249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.021 [2024-07-26 16:41:36.673283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.021 qpair failed and we were unable to recover it. 00:36:17.021 [2024-07-26 16:41:36.673549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.021 [2024-07-26 16:41:36.673587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.021 qpair failed and we were unable to recover it. 00:36:17.021 [2024-07-26 16:41:36.673857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.021 [2024-07-26 16:41:36.673891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.021 qpair failed and we were unable to recover it. 00:36:17.021 [2024-07-26 16:41:36.674045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.021 [2024-07-26 16:41:36.674091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.021 qpair failed and we were unable to recover it. 
00:36:17.021 [2024-07-26 16:41:36.674283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.021 [2024-07-26 16:41:36.674321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.021 qpair failed and we were unable to recover it. 00:36:17.021 [2024-07-26 16:41:36.674525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.021 [2024-07-26 16:41:36.674562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.021 qpair failed and we were unable to recover it. 00:36:17.021 [2024-07-26 16:41:36.674752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.021 [2024-07-26 16:41:36.674786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.021 qpair failed and we were unable to recover it. 00:36:17.021 [2024-07-26 16:41:36.674989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.021 [2024-07-26 16:41:36.675031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.021 qpair failed and we were unable to recover it. 00:36:17.021 [2024-07-26 16:41:36.675260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.021 [2024-07-26 16:41:36.675297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.021 qpair failed and we were unable to recover it. 00:36:17.021 [2024-07-26 16:41:36.675493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.021 [2024-07-26 16:41:36.675527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.021 qpair failed and we were unable to recover it. 00:36:17.021 [2024-07-26 16:41:36.675680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.021 [2024-07-26 16:41:36.675719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.021 qpair failed and we were unable to recover it. 00:36:17.021 [2024-07-26 16:41:36.675927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.021 [2024-07-26 16:41:36.675978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.021 qpair failed and we were unable to recover it. 00:36:17.021 [2024-07-26 16:41:36.676157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.021 [2024-07-26 16:41:36.676191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.021 qpair failed and we were unable to recover it. 00:36:17.021 [2024-07-26 16:41:36.676392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.021 [2024-07-26 16:41:36.676430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.021 qpair failed and we were unable to recover it. 
00:36:17.021 [2024-07-26 16:41:36.676616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.021 [2024-07-26 16:41:36.676653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.021 qpair failed and we were unable to recover it. 00:36:17.021 [2024-07-26 16:41:36.676860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.021 [2024-07-26 16:41:36.676893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.021 qpair failed and we were unable to recover it. 00:36:17.021 [2024-07-26 16:41:36.677071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.021 [2024-07-26 16:41:36.677109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.021 qpair failed and we were unable to recover it. 00:36:17.021 [2024-07-26 16:41:36.677277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.021 [2024-07-26 16:41:36.677315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.021 qpair failed and we were unable to recover it. 00:36:17.021 [2024-07-26 16:41:36.677515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.021 [2024-07-26 16:41:36.677549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.021 qpair failed and we were unable to recover it. 00:36:17.021 [2024-07-26 16:41:36.677750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.021 [2024-07-26 16:41:36.677788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.021 qpair failed and we were unable to recover it. 00:36:17.021 [2024-07-26 16:41:36.677983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.021 [2024-07-26 16:41:36.678020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.021 qpair failed and we were unable to recover it. 00:36:17.021 [2024-07-26 16:41:36.678262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.021 [2024-07-26 16:41:36.678297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.021 qpair failed and we were unable to recover it. 00:36:17.021 [2024-07-26 16:41:36.678500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.021 [2024-07-26 16:41:36.678538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.021 qpair failed and we were unable to recover it. 00:36:17.021 [2024-07-26 16:41:36.678730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.021 [2024-07-26 16:41:36.678766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.021 qpair failed and we were unable to recover it. 
00:36:17.021 [2024-07-26 16:41:36.678944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.021 [2024-07-26 16:41:36.678979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.021 qpair failed and we were unable to recover it. 00:36:17.022 [2024-07-26 16:41:36.679167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.022 [2024-07-26 16:41:36.679206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.022 qpair failed and we were unable to recover it. 00:36:17.022 [2024-07-26 16:41:36.679400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.022 [2024-07-26 16:41:36.679438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.022 qpair failed and we were unable to recover it. 00:36:17.022 [2024-07-26 16:41:36.679640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.022 [2024-07-26 16:41:36.679674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.022 qpair failed and we were unable to recover it. 00:36:17.022 [2024-07-26 16:41:36.679875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.022 [2024-07-26 16:41:36.679913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.022 qpair failed and we were unable to recover it. 00:36:17.022 [2024-07-26 16:41:36.680119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.022 [2024-07-26 16:41:36.680153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.022 qpair failed and we were unable to recover it. 00:36:17.022 [2024-07-26 16:41:36.680331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.022 [2024-07-26 16:41:36.680365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.022 qpair failed and we were unable to recover it. 00:36:17.022 [2024-07-26 16:41:36.680559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.022 [2024-07-26 16:41:36.680595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.022 qpair failed and we were unable to recover it. 00:36:17.022 [2024-07-26 16:41:36.680777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.022 [2024-07-26 16:41:36.680810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.022 qpair failed and we were unable to recover it. 00:36:17.022 [2024-07-26 16:41:36.680998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.022 [2024-07-26 16:41:36.681036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.022 qpair failed and we were unable to recover it. 
00:36:17.022 [2024-07-26 16:41:36.681244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.022 [2024-07-26 16:41:36.681281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.022 qpair failed and we were unable to recover it. 00:36:17.022 [2024-07-26 16:41:36.681481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.022 [2024-07-26 16:41:36.681514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.022 qpair failed and we were unable to recover it. 00:36:17.022 [2024-07-26 16:41:36.681669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.022 [2024-07-26 16:41:36.681703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.022 qpair failed and we were unable to recover it. 00:36:17.022 [2024-07-26 16:41:36.681862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.022 [2024-07-26 16:41:36.681896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.022 qpair failed and we were unable to recover it. 00:36:17.022 [2024-07-26 16:41:36.682091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.022 [2024-07-26 16:41:36.682128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.022 qpair failed and we were unable to recover it. 00:36:17.022 [2024-07-26 16:41:36.682328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.022 [2024-07-26 16:41:36.682374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.022 qpair failed and we were unable to recover it. 00:36:17.022 [2024-07-26 16:41:36.682550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.022 [2024-07-26 16:41:36.682583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.022 qpair failed and we were unable to recover it. 00:36:17.022 [2024-07-26 16:41:36.682776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.022 [2024-07-26 16:41:36.682814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.022 qpair failed and we were unable to recover it. 00:36:17.022 [2024-07-26 16:41:36.683021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.022 [2024-07-26 16:41:36.683055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.022 qpair failed and we were unable to recover it. 00:36:17.022 [2024-07-26 16:41:36.683267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.022 [2024-07-26 16:41:36.683304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.022 qpair failed and we were unable to recover it. 
00:36:17.022 [2024-07-26 16:41:36.683493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.022 [2024-07-26 16:41:36.683530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.022 qpair failed and we were unable to recover it. 00:36:17.022 [2024-07-26 16:41:36.683721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.022 [2024-07-26 16:41:36.683756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.022 qpair failed and we were unable to recover it. 00:36:17.022 [2024-07-26 16:41:36.683898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.022 [2024-07-26 16:41:36.683931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.022 qpair failed and we were unable to recover it. 00:36:17.022 [2024-07-26 16:41:36.684075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.022 [2024-07-26 16:41:36.684109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.022 qpair failed and we were unable to recover it. 00:36:17.022 [2024-07-26 16:41:36.684287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.022 [2024-07-26 16:41:36.684332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.022 qpair failed and we were unable to recover it. 00:36:17.022 [2024-07-26 16:41:36.684511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.022 [2024-07-26 16:41:36.684545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.022 qpair failed and we were unable to recover it. 00:36:17.022 [2024-07-26 16:41:36.684737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.022 [2024-07-26 16:41:36.684778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.022 qpair failed and we were unable to recover it. 00:36:17.022 [2024-07-26 16:41:36.684981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.022 [2024-07-26 16:41:36.685016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.022 qpair failed and we were unable to recover it. 00:36:17.022 [2024-07-26 16:41:36.685196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.022 [2024-07-26 16:41:36.685230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.022 qpair failed and we were unable to recover it. 00:36:17.022 [2024-07-26 16:41:36.685437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.022 [2024-07-26 16:41:36.685474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.022 qpair failed and we were unable to recover it. 
00:36:17.022 [2024-07-26 16:41:36.685647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.022 [2024-07-26 16:41:36.685680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.022 qpair failed and we were unable to recover it. 00:36:17.022 [2024-07-26 16:41:36.685820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.022 [2024-07-26 16:41:36.685870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.022 qpair failed and we were unable to recover it. 00:36:17.022 [2024-07-26 16:41:36.686086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.022 [2024-07-26 16:41:36.686122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.022 qpair failed and we were unable to recover it. 00:36:17.022 [2024-07-26 16:41:36.686301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.022 [2024-07-26 16:41:36.686335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.022 qpair failed and we were unable to recover it. 00:36:17.022 [2024-07-26 16:41:36.686527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.022 [2024-07-26 16:41:36.686564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.022 qpair failed and we were unable to recover it. 00:36:17.022 [2024-07-26 16:41:36.686748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.022 [2024-07-26 16:41:36.686788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.022 qpair failed and we were unable to recover it. 00:36:17.022 [2024-07-26 16:41:36.687005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.022 [2024-07-26 16:41:36.687038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.022 qpair failed and we were unable to recover it. 00:36:17.022 [2024-07-26 16:41:36.687226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.022 [2024-07-26 16:41:36.687263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.022 qpair failed and we were unable to recover it. 00:36:17.022 [2024-07-26 16:41:36.687441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.023 [2024-07-26 16:41:36.687479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.023 qpair failed and we were unable to recover it. 00:36:17.023 [2024-07-26 16:41:36.687657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.023 [2024-07-26 16:41:36.687690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.023 qpair failed and we were unable to recover it. 
00:36:17.023 [2024-07-26 16:41:36.687899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.023 [2024-07-26 16:41:36.687949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.023 qpair failed and we were unable to recover it. 00:36:17.023 [2024-07-26 16:41:36.688152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.023 [2024-07-26 16:41:36.688188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.023 qpair failed and we were unable to recover it. 00:36:17.023 [2024-07-26 16:41:36.688335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.023 [2024-07-26 16:41:36.688368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.023 qpair failed and we were unable to recover it. 00:36:17.023 [2024-07-26 16:41:36.688537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.023 [2024-07-26 16:41:36.688575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.023 qpair failed and we were unable to recover it. 00:36:17.023 [2024-07-26 16:41:36.688785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.023 [2024-07-26 16:41:36.688820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.023 qpair failed and we were unable to recover it. 00:36:17.023 [2024-07-26 16:41:36.689019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.023 [2024-07-26 16:41:36.689052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.023 qpair failed and we were unable to recover it. 00:36:17.023 [2024-07-26 16:41:36.689266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.023 [2024-07-26 16:41:36.689304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.023 qpair failed and we were unable to recover it. 00:36:17.023 [2024-07-26 16:41:36.689499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.023 [2024-07-26 16:41:36.689532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.023 qpair failed and we were unable to recover it. 00:36:17.023 [2024-07-26 16:41:36.689721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.023 [2024-07-26 16:41:36.689754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.023 qpair failed and we were unable to recover it. 00:36:17.023 [2024-07-26 16:41:36.689934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.023 [2024-07-26 16:41:36.689972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.023 qpair failed and we were unable to recover it. 
00:36:17.023 [2024-07-26 16:41:36.690144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.023 [2024-07-26 16:41:36.690183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.023 qpair failed and we were unable to recover it. 00:36:17.023 [2024-07-26 16:41:36.690390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.023 [2024-07-26 16:41:36.690423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.023 qpair failed and we were unable to recover it. 00:36:17.023 [2024-07-26 16:41:36.690579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.023 [2024-07-26 16:41:36.690616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.023 qpair failed and we were unable to recover it. 00:36:17.023 [2024-07-26 16:41:36.690830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.023 [2024-07-26 16:41:36.690867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.023 qpair failed and we were unable to recover it. 00:36:17.023 [2024-07-26 16:41:36.691071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.023 [2024-07-26 16:41:36.691105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.023 qpair failed and we were unable to recover it. 00:36:17.023 [2024-07-26 16:41:36.691261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.023 [2024-07-26 16:41:36.691295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.023 qpair failed and we were unable to recover it. 00:36:17.023 [2024-07-26 16:41:36.691502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.023 [2024-07-26 16:41:36.691546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.023 qpair failed and we were unable to recover it. 00:36:17.023 [2024-07-26 16:41:36.691822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.023 [2024-07-26 16:41:36.691855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.023 qpair failed and we were unable to recover it. 00:36:17.023 [2024-07-26 16:41:36.692073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.023 [2024-07-26 16:41:36.692126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.023 qpair failed and we were unable to recover it. 00:36:17.023 [2024-07-26 16:41:36.692312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.023 [2024-07-26 16:41:36.692346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.023 qpair failed and we were unable to recover it. 
00:36:17.023 [2024-07-26 16:41:36.692528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.023 [2024-07-26 16:41:36.692561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.023 qpair failed and we were unable to recover it. 00:36:17.023 [2024-07-26 16:41:36.692763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.023 [2024-07-26 16:41:36.692804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.023 qpair failed and we were unable to recover it. 00:36:17.023 [2024-07-26 16:41:36.692989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.023 [2024-07-26 16:41:36.693026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.023 qpair failed and we were unable to recover it. 00:36:17.023 [2024-07-26 16:41:36.693219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.023 [2024-07-26 16:41:36.693253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.023 qpair failed and we were unable to recover it. 00:36:17.023 [2024-07-26 16:41:36.693457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.023 [2024-07-26 16:41:36.693505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.023 qpair failed and we were unable to recover it. 00:36:17.023 [2024-07-26 16:41:36.693771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.023 [2024-07-26 16:41:36.693808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.023 qpair failed and we were unable to recover it. 00:36:17.023 [2024-07-26 16:41:36.694009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.023 [2024-07-26 16:41:36.694048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.023 qpair failed and we were unable to recover it. 00:36:17.023 [2024-07-26 16:41:36.694237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.023 [2024-07-26 16:41:36.694288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.023 qpair failed and we were unable to recover it. 00:36:17.023 [2024-07-26 16:41:36.694480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.023 [2024-07-26 16:41:36.694527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.023 qpair failed and we were unable to recover it. 00:36:17.023 [2024-07-26 16:41:36.694736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.023 [2024-07-26 16:41:36.694769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.023 qpair failed and we were unable to recover it. 
00:36:17.023 [2024-07-26 16:41:36.694918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.023 [2024-07-26 16:41:36.694952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.023 qpair failed and we were unable to recover it. 00:36:17.023 [2024-07-26 16:41:36.695181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.023 [2024-07-26 16:41:36.695220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.023 qpair failed and we were unable to recover it. 00:36:17.023 [2024-07-26 16:41:36.695396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.023 [2024-07-26 16:41:36.695431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.023 qpair failed and we were unable to recover it. 00:36:17.023 [2024-07-26 16:41:36.695621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.023 [2024-07-26 16:41:36.695675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.023 qpair failed and we were unable to recover it. 00:36:17.023 [2024-07-26 16:41:36.695907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.023 [2024-07-26 16:41:36.695947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.023 qpair failed and we were unable to recover it. 00:36:17.023 [2024-07-26 16:41:36.696130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.023 [2024-07-26 16:41:36.696169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.023 qpair failed and we were unable to recover it. 00:36:17.024 [2024-07-26 16:41:36.696415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.024 [2024-07-26 16:41:36.696454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.024 qpair failed and we were unable to recover it. 00:36:17.024 [2024-07-26 16:41:36.696650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.024 [2024-07-26 16:41:36.696689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.024 qpair failed and we were unable to recover it. 00:36:17.024 [2024-07-26 16:41:36.696868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.024 [2024-07-26 16:41:36.696903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.024 qpair failed and we were unable to recover it. 00:36:17.024 [2024-07-26 16:41:36.697073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.024 [2024-07-26 16:41:36.697113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.024 qpair failed and we were unable to recover it. 
00:36:17.024 [2024-07-26 16:41:36.697332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.024 [2024-07-26 16:41:36.697367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.024 qpair failed and we were unable to recover it. 00:36:17.024 [2024-07-26 16:41:36.697573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.024 [2024-07-26 16:41:36.697607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.024 qpair failed and we were unable to recover it. 00:36:17.024 [2024-07-26 16:41:36.697765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.024 [2024-07-26 16:41:36.697804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.024 qpair failed and we were unable to recover it. 00:36:17.024 [2024-07-26 16:41:36.698036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.024 [2024-07-26 16:41:36.698076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.024 qpair failed and we were unable to recover it. 00:36:17.024 [2024-07-26 16:41:36.698257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.024 [2024-07-26 16:41:36.698291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.024 qpair failed and we were unable to recover it. 00:36:17.024 [2024-07-26 16:41:36.698441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.024 [2024-07-26 16:41:36.698478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.024 qpair failed and we were unable to recover it. 00:36:17.024 [2024-07-26 16:41:36.698662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.024 [2024-07-26 16:41:36.698700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.024 qpair failed and we were unable to recover it. 00:36:17.024 [2024-07-26 16:41:36.698925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.024 [2024-07-26 16:41:36.698963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.024 qpair failed and we were unable to recover it. 00:36:17.024 [2024-07-26 16:41:36.699180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.024 [2024-07-26 16:41:36.699216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.024 qpair failed and we were unable to recover it. 00:36:17.024 [2024-07-26 16:41:36.699411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.024 [2024-07-26 16:41:36.699452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.024 qpair failed and we were unable to recover it. 
00:36:17.024 [2024-07-26 16:41:36.699630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.024 [2024-07-26 16:41:36.699665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.024 qpair failed and we were unable to recover it. 00:36:17.024 [2024-07-26 16:41:36.699881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.024 [2024-07-26 16:41:36.699945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.024 qpair failed and we were unable to recover it. 00:36:17.024 [2024-07-26 16:41:36.700193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.024 [2024-07-26 16:41:36.700232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.024 qpair failed and we were unable to recover it. 00:36:17.024 [2024-07-26 16:41:36.700417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.024 [2024-07-26 16:41:36.700453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.024 qpair failed and we were unable to recover it. 00:36:17.024 [2024-07-26 16:41:36.700676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.024 [2024-07-26 16:41:36.700732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.024 qpair failed and we were unable to recover it. 00:36:17.024 [2024-07-26 16:41:36.700896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.024 [2024-07-26 16:41:36.700934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.024 qpair failed and we were unable to recover it. 00:36:17.024 [2024-07-26 16:41:36.701108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.024 [2024-07-26 16:41:36.701143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.024 qpair failed and we were unable to recover it. 00:36:17.024 [2024-07-26 16:41:36.701364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.024 [2024-07-26 16:41:36.701402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.024 qpair failed and we were unable to recover it. 00:36:17.024 [2024-07-26 16:41:36.701608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.024 [2024-07-26 16:41:36.701643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.024 qpair failed and we were unable to recover it. 00:36:17.024 [2024-07-26 16:41:36.701822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.024 [2024-07-26 16:41:36.701856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.024 qpair failed and we were unable to recover it. 
00:36:17.024 [2024-07-26 16:41:36.702068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.024 [2024-07-26 16:41:36.702107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.024 qpair failed and we were unable to recover it. 00:36:17.024 [2024-07-26 16:41:36.702305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.024 [2024-07-26 16:41:36.702343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.024 qpair failed and we were unable to recover it. 00:36:17.024 [2024-07-26 16:41:36.702520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.024 [2024-07-26 16:41:36.702553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.024 qpair failed and we were unable to recover it. 00:36:17.024 [2024-07-26 16:41:36.702804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.024 [2024-07-26 16:41:36.702863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.024 qpair failed and we were unable to recover it. 00:36:17.024 [2024-07-26 16:41:36.703070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.024 [2024-07-26 16:41:36.703109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.024 qpair failed and we were unable to recover it. 00:36:17.024 [2024-07-26 16:41:36.703276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.024 [2024-07-26 16:41:36.703310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.024 qpair failed and we were unable to recover it. 00:36:17.024 [2024-07-26 16:41:36.703489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.024 [2024-07-26 16:41:36.703529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.024 qpair failed and we were unable to recover it. 00:36:17.024 [2024-07-26 16:41:36.703724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.024 [2024-07-26 16:41:36.703763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.024 qpair failed and we were unable to recover it. 00:36:17.024 [2024-07-26 16:41:36.703949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.024 [2024-07-26 16:41:36.703987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.024 qpair failed and we were unable to recover it. 00:36:17.024 [2024-07-26 16:41:36.704158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.024 [2024-07-26 16:41:36.704194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.024 qpair failed and we were unable to recover it. 
00:36:17.024 [2024-07-26 16:41:36.704415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.024 [2024-07-26 16:41:36.704454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.024 qpair failed and we were unable to recover it. 00:36:17.024 [2024-07-26 16:41:36.704655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.024 [2024-07-26 16:41:36.704689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.024 qpair failed and we were unable to recover it. 00:36:17.024 [2024-07-26 16:41:36.704878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.024 [2024-07-26 16:41:36.704912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.024 qpair failed and we were unable to recover it. 00:36:17.024 [2024-07-26 16:41:36.705149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.025 [2024-07-26 16:41:36.705191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.025 qpair failed and we were unable to recover it. 00:36:17.025 [2024-07-26 16:41:36.705360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.025 [2024-07-26 16:41:36.705398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.025 qpair failed and we were unable to recover it. 00:36:17.025 [2024-07-26 16:41:36.705552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.025 [2024-07-26 16:41:36.705587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.025 qpair failed and we were unable to recover it. 00:36:17.025 [2024-07-26 16:41:36.705797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.025 [2024-07-26 16:41:36.705839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.025 qpair failed and we were unable to recover it. 00:36:17.025 [2024-07-26 16:41:36.706021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.025 [2024-07-26 16:41:36.706055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.025 qpair failed and we were unable to recover it. 00:36:17.025 [2024-07-26 16:41:36.706214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.025 [2024-07-26 16:41:36.706248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.025 qpair failed and we were unable to recover it. 00:36:17.025 [2024-07-26 16:41:36.706465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.025 [2024-07-26 16:41:36.706505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.025 qpair failed and we were unable to recover it. 
00:36:17.025 [2024-07-26 16:41:36.706724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.025 [2024-07-26 16:41:36.706759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.025 qpair failed and we were unable to recover it. 00:36:17.025 [2024-07-26 16:41:36.706957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.025 [2024-07-26 16:41:36.706994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.025 qpair failed and we were unable to recover it. 00:36:17.025 [2024-07-26 16:41:36.707193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.025 [2024-07-26 16:41:36.707228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.025 qpair failed and we were unable to recover it. 00:36:17.025 [2024-07-26 16:41:36.707404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.025 [2024-07-26 16:41:36.707438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.025 qpair failed and we were unable to recover it. 00:36:17.025 [2024-07-26 16:41:36.707670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.025 [2024-07-26 16:41:36.707708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.025 qpair failed and we were unable to recover it. 00:36:17.025 [2024-07-26 16:41:36.707875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.025 [2024-07-26 16:41:36.707929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.025 qpair failed and we were unable to recover it. 00:36:17.025 [2024-07-26 16:41:36.708111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.025 [2024-07-26 16:41:36.708147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.025 qpair failed and we were unable to recover it. 00:36:17.025 [2024-07-26 16:41:36.708322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.025 [2024-07-26 16:41:36.708360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.025 qpair failed and we were unable to recover it. 00:36:17.025 [2024-07-26 16:41:36.708558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.025 [2024-07-26 16:41:36.708596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.025 qpair failed and we were unable to recover it. 00:36:17.025 [2024-07-26 16:41:36.708796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.025 [2024-07-26 16:41:36.708830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.025 qpair failed and we were unable to recover it. 
00:36:17.025 [2024-07-26 16:41:36.709031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.025 [2024-07-26 16:41:36.709076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.025 qpair failed and we were unable to recover it. 00:36:17.025 [2024-07-26 16:41:36.709272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.025 [2024-07-26 16:41:36.709310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.025 qpair failed and we were unable to recover it. 00:36:17.025 [2024-07-26 16:41:36.709488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.025 [2024-07-26 16:41:36.709525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.025 qpair failed and we were unable to recover it. 00:36:17.025 [2024-07-26 16:41:36.709685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.025 [2024-07-26 16:41:36.709719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.025 qpair failed and we were unable to recover it. 00:36:17.025 [2024-07-26 16:41:36.709928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.025 [2024-07-26 16:41:36.709981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.025 qpair failed and we were unable to recover it. 00:36:17.025 [2024-07-26 16:41:36.710187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.025 [2024-07-26 16:41:36.710222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.025 qpair failed and we were unable to recover it. 00:36:17.025 [2024-07-26 16:41:36.710403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.025 [2024-07-26 16:41:36.710442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.025 qpair failed and we were unable to recover it. 00:36:17.025 [2024-07-26 16:41:36.710643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.025 [2024-07-26 16:41:36.710678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.025 qpair failed and we were unable to recover it. 00:36:17.025 [2024-07-26 16:41:36.710832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.025 [2024-07-26 16:41:36.710866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.025 qpair failed and we were unable to recover it. 00:36:17.025 [2024-07-26 16:41:36.711042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.025 [2024-07-26 16:41:36.711090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.025 qpair failed and we were unable to recover it. 
00:36:17.025 [2024-07-26 16:41:36.711299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.025 [2024-07-26 16:41:36.711337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.025 qpair failed and we were unable to recover it. 00:36:17.025 [2024-07-26 16:41:36.711558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.025 [2024-07-26 16:41:36.711593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.025 qpair failed and we were unable to recover it. 00:36:17.025 [2024-07-26 16:41:36.711788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.025 [2024-07-26 16:41:36.711826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.025 qpair failed and we were unable to recover it. 00:36:17.025 [2024-07-26 16:41:36.712040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.025 [2024-07-26 16:41:36.712089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.025 qpair failed and we were unable to recover it. 00:36:17.025 [2024-07-26 16:41:36.712303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.025 [2024-07-26 16:41:36.712338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.025 qpair failed and we were unable to recover it. 00:36:17.025 [2024-07-26 16:41:36.712511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.025 [2024-07-26 16:41:36.712549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.026 qpair failed and we were unable to recover it. 00:36:17.026 [2024-07-26 16:41:36.712769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.026 [2024-07-26 16:41:36.712811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.026 qpair failed and we were unable to recover it. 00:36:17.026 [2024-07-26 16:41:36.712981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.026 [2024-07-26 16:41:36.713015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.026 qpair failed and we were unable to recover it. 00:36:17.026 [2024-07-26 16:41:36.713228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.026 [2024-07-26 16:41:36.713266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.026 qpair failed and we were unable to recover it. 00:36:17.026 [2024-07-26 16:41:36.713462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.026 [2024-07-26 16:41:36.713500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.026 qpair failed and we were unable to recover it. 
00:36:17.026 [2024-07-26 16:41:36.713693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.026 [2024-07-26 16:41:36.713727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.026 qpair failed and we were unable to recover it. 00:36:17.026 [2024-07-26 16:41:36.713889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.026 [2024-07-26 16:41:36.713927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.026 qpair failed and we were unable to recover it. 00:36:17.026 [2024-07-26 16:41:36.714124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.026 [2024-07-26 16:41:36.714164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.026 qpair failed and we were unable to recover it. 00:36:17.026 [2024-07-26 16:41:36.714368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.026 [2024-07-26 16:41:36.714403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.026 qpair failed and we were unable to recover it. 00:36:17.026 [2024-07-26 16:41:36.714599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.026 [2024-07-26 16:41:36.714637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.026 qpair failed and we were unable to recover it. 00:36:17.026 [2024-07-26 16:41:36.714856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.026 [2024-07-26 16:41:36.714893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.026 qpair failed and we were unable to recover it. 00:36:17.026 [2024-07-26 16:41:36.715135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.026 [2024-07-26 16:41:36.715180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.026 qpair failed and we were unable to recover it. 00:36:17.026 [2024-07-26 16:41:36.715359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.026 [2024-07-26 16:41:36.715398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.026 qpair failed and we were unable to recover it. 00:36:17.026 [2024-07-26 16:41:36.715587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.026 [2024-07-26 16:41:36.715624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.026 qpair failed and we were unable to recover it. 00:36:17.026 [2024-07-26 16:41:36.715806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.026 [2024-07-26 16:41:36.715841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.026 qpair failed and we were unable to recover it. 
00:36:17.026 [2024-07-26 16:41:36.716077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.026 [2024-07-26 16:41:36.716115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.026 qpair failed and we were unable to recover it. 00:36:17.026 [2024-07-26 16:41:36.716344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.026 [2024-07-26 16:41:36.716382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.026 qpair failed and we were unable to recover it. 00:36:17.026 [2024-07-26 16:41:36.716580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.026 [2024-07-26 16:41:36.716615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.026 qpair failed and we were unable to recover it. 00:36:17.026 [2024-07-26 16:41:36.716773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.026 [2024-07-26 16:41:36.716807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.026 qpair failed and we were unable to recover it. 00:36:17.026 [2024-07-26 16:41:36.717032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.026 [2024-07-26 16:41:36.717076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.026 qpair failed and we were unable to recover it. 00:36:17.026 [2024-07-26 16:41:36.717301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.026 [2024-07-26 16:41:36.717335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.026 qpair failed and we were unable to recover it. 00:36:17.026 [2024-07-26 16:41:36.717524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.026 [2024-07-26 16:41:36.717562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.026 qpair failed and we were unable to recover it. 00:36:17.026 [2024-07-26 16:41:36.717773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.026 [2024-07-26 16:41:36.717810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.026 qpair failed and we were unable to recover it. 00:36:17.026 [2024-07-26 16:41:36.718036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.026 [2024-07-26 16:41:36.718077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.026 qpair failed and we were unable to recover it. 00:36:17.026 [2024-07-26 16:41:36.718287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.026 [2024-07-26 16:41:36.718326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.026 qpair failed and we were unable to recover it. 
00:36:17.026 [2024-07-26 16:41:36.718530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.026 [2024-07-26 16:41:36.718568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.026 qpair failed and we were unable to recover it. 00:36:17.026 [2024-07-26 16:41:36.718784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.026 [2024-07-26 16:41:36.718818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.026 qpair failed and we were unable to recover it. 00:36:17.026 [2024-07-26 16:41:36.719018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.026 [2024-07-26 16:41:36.719056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.026 qpair failed and we were unable to recover it. 00:36:17.026 [2024-07-26 16:41:36.719279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.026 [2024-07-26 16:41:36.719318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.026 qpair failed and we were unable to recover it. 00:36:17.026 [2024-07-26 16:41:36.719544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.026 [2024-07-26 16:41:36.719583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.026 qpair failed and we were unable to recover it. 00:36:17.026 [2024-07-26 16:41:36.719816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.026 [2024-07-26 16:41:36.719853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.026 qpair failed and we were unable to recover it. 00:36:17.026 [2024-07-26 16:41:36.720067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.026 [2024-07-26 16:41:36.720102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.026 qpair failed and we were unable to recover it. 00:36:17.026 [2024-07-26 16:41:36.720279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.026 [2024-07-26 16:41:36.720312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.026 qpair failed and we were unable to recover it. 00:36:17.026 [2024-07-26 16:41:36.720541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.026 [2024-07-26 16:41:36.720578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.026 qpair failed and we were unable to recover it. 00:36:17.026 [2024-07-26 16:41:36.720806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.026 [2024-07-26 16:41:36.720844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.026 qpair failed and we were unable to recover it. 
00:36:17.026 [2024-07-26 16:41:36.721042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.026 [2024-07-26 16:41:36.721084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.026 qpair failed and we were unable to recover it. 00:36:17.026 [2024-07-26 16:41:36.721253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.026 [2024-07-26 16:41:36.721291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.026 qpair failed and we were unable to recover it. 00:36:17.026 [2024-07-26 16:41:36.721486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.026 [2024-07-26 16:41:36.721524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.026 qpair failed and we were unable to recover it. 00:36:17.027 [2024-07-26 16:41:36.721726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.027 [2024-07-26 16:41:36.721760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.027 qpair failed and we were unable to recover it. 00:36:17.027 [2024-07-26 16:41:36.721945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.027 [2024-07-26 16:41:36.721979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.027 qpair failed and we were unable to recover it. 00:36:17.027 [2024-07-26 16:41:36.722176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.027 [2024-07-26 16:41:36.722214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.027 qpair failed and we were unable to recover it. 00:36:17.027 [2024-07-26 16:41:36.722433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.027 [2024-07-26 16:41:36.722471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.027 qpair failed and we were unable to recover it. 00:36:17.027 [2024-07-26 16:41:36.722681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.027 [2024-07-26 16:41:36.722715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.027 qpair failed and we were unable to recover it. 00:36:17.027 [2024-07-26 16:41:36.722927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.027 [2024-07-26 16:41:36.722981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.027 qpair failed and we were unable to recover it. 00:36:17.027 [2024-07-26 16:41:36.723202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.027 [2024-07-26 16:41:36.723269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.027 qpair failed and we were unable to recover it. 
00:36:17.027 [2024-07-26 16:41:36.723476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.027 [2024-07-26 16:41:36.723515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.027 qpair failed and we were unable to recover it. 00:36:17.027 [2024-07-26 16:41:36.723735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.027 [2024-07-26 16:41:36.723772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.027 qpair failed and we were unable to recover it. 00:36:17.027 [2024-07-26 16:41:36.723950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.027 [2024-07-26 16:41:36.723986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.027 qpair failed and we were unable to recover it. 00:36:17.027 [2024-07-26 16:41:36.724211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.027 [2024-07-26 16:41:36.724250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.027 qpair failed and we were unable to recover it. 00:36:17.027 [2024-07-26 16:41:36.724452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.027 [2024-07-26 16:41:36.724486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.027 qpair failed and we were unable to recover it. 00:36:17.027 [2024-07-26 16:41:36.724659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.027 [2024-07-26 16:41:36.724692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.027 qpair failed and we were unable to recover it. 00:36:17.027 [2024-07-26 16:41:36.724889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.027 [2024-07-26 16:41:36.724927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.027 qpair failed and we were unable to recover it. 00:36:17.027 [2024-07-26 16:41:36.725150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.027 [2024-07-26 16:41:36.725188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.027 qpair failed and we were unable to recover it. 00:36:17.027 [2024-07-26 16:41:36.725388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.027 [2024-07-26 16:41:36.725422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.027 qpair failed and we were unable to recover it. 00:36:17.027 [2024-07-26 16:41:36.725605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.027 [2024-07-26 16:41:36.725638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.027 qpair failed and we were unable to recover it. 
00:36:17.027 [2024-07-26 16:41:36.725845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.027 [2024-07-26 16:41:36.725882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.027 qpair failed and we were unable to recover it. 00:36:17.027 [2024-07-26 16:41:36.726087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.027 [2024-07-26 16:41:36.726122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.027 qpair failed and we were unable to recover it. 00:36:17.027 [2024-07-26 16:41:36.726323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.027 [2024-07-26 16:41:36.726360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.027 qpair failed and we were unable to recover it. 00:36:17.027 [2024-07-26 16:41:36.726516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.027 [2024-07-26 16:41:36.726553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.027 qpair failed and we were unable to recover it. 00:36:17.027 [2024-07-26 16:41:36.726775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.027 [2024-07-26 16:41:36.726809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.027 qpair failed and we were unable to recover it. 00:36:17.027 [2024-07-26 16:41:36.726970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.027 [2024-07-26 16:41:36.727006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.027 qpair failed and we were unable to recover it. 00:36:17.027 [2024-07-26 16:41:36.727236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.027 [2024-07-26 16:41:36.727275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.027 qpair failed and we were unable to recover it. 00:36:17.027 [2024-07-26 16:41:36.727481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.027 [2024-07-26 16:41:36.727514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.027 qpair failed and we were unable to recover it. 00:36:17.027 [2024-07-26 16:41:36.727741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.027 [2024-07-26 16:41:36.727779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.027 qpair failed and we were unable to recover it. 00:36:17.027 [2024-07-26 16:41:36.728019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.027 [2024-07-26 16:41:36.728054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.027 qpair failed and we were unable to recover it. 
00:36:17.027 [2024-07-26 16:41:36.728273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.027 [2024-07-26 16:41:36.728307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.027 qpair failed and we were unable to recover it. 00:36:17.027 [2024-07-26 16:41:36.728499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.027 [2024-07-26 16:41:36.728536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.027 qpair failed and we were unable to recover it. 00:36:17.027 [2024-07-26 16:41:36.728748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.027 [2024-07-26 16:41:36.728782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.027 qpair failed and we were unable to recover it. 00:36:17.027 [2024-07-26 16:41:36.728981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.027 [2024-07-26 16:41:36.729015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.027 qpair failed and we were unable to recover it. 00:36:17.027 [2024-07-26 16:41:36.729257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.027 [2024-07-26 16:41:36.729291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.027 qpair failed and we were unable to recover it. 00:36:17.027 [2024-07-26 16:41:36.729513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.027 [2024-07-26 16:41:36.729551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.027 qpair failed and we were unable to recover it. 00:36:17.027 [2024-07-26 16:41:36.729782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.027 [2024-07-26 16:41:36.729816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.027 qpair failed and we were unable to recover it. 00:36:17.027 [2024-07-26 16:41:36.730018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.027 [2024-07-26 16:41:36.730056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.027 qpair failed and we were unable to recover it. 00:36:17.027 [2024-07-26 16:41:36.730287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.027 [2024-07-26 16:41:36.730325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.027 qpair failed and we were unable to recover it. 00:36:17.027 [2024-07-26 16:41:36.730493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.027 [2024-07-26 16:41:36.730528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.027 qpair failed and we were unable to recover it. 
00:36:17.027 [2024-07-26 16:41:36.730733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.028 [2024-07-26 16:41:36.730771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.028 qpair failed and we were unable to recover it. 00:36:17.028 [2024-07-26 16:41:36.730987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.028 [2024-07-26 16:41:36.731021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.028 qpair failed and we were unable to recover it. 00:36:17.028 [2024-07-26 16:41:36.731251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.028 [2024-07-26 16:41:36.731286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.028 qpair failed and we were unable to recover it. 00:36:17.028 [2024-07-26 16:41:36.731521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.028 [2024-07-26 16:41:36.731559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.028 qpair failed and we were unable to recover it. 00:36:17.028 [2024-07-26 16:41:36.731755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.028 [2024-07-26 16:41:36.731793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.028 qpair failed and we were unable to recover it. 00:36:17.028 [2024-07-26 16:41:36.731991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.028 [2024-07-26 16:41:36.732026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.028 qpair failed and we were unable to recover it. 00:36:17.028 [2024-07-26 16:41:36.732216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.028 [2024-07-26 16:41:36.732254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.028 qpair failed and we were unable to recover it. 00:36:17.028 [2024-07-26 16:41:36.732454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.028 [2024-07-26 16:41:36.732491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.028 qpair failed and we were unable to recover it. 00:36:17.028 [2024-07-26 16:41:36.732679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.028 [2024-07-26 16:41:36.732713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.028 qpair failed and we were unable to recover it. 00:36:17.028 [2024-07-26 16:41:36.732886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.028 [2024-07-26 16:41:36.732920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.028 qpair failed and we were unable to recover it. 
00:36:17.028 [2024-07-26 16:41:36.733150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.028 [2024-07-26 16:41:36.733189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.028 qpair failed and we were unable to recover it. 00:36:17.028 [2024-07-26 16:41:36.733389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.028 [2024-07-26 16:41:36.733422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.028 qpair failed and we were unable to recover it. 00:36:17.028 [2024-07-26 16:41:36.733625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.028 [2024-07-26 16:41:36.733663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.028 qpair failed and we were unable to recover it. 00:36:17.028 [2024-07-26 16:41:36.733823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.028 [2024-07-26 16:41:36.733861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.028 qpair failed and we were unable to recover it. 00:36:17.028 [2024-07-26 16:41:36.734091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.028 [2024-07-26 16:41:36.734125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.028 qpair failed and we were unable to recover it. 00:36:17.028 [2024-07-26 16:41:36.734294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.028 [2024-07-26 16:41:36.734331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.028 qpair failed and we were unable to recover it. 00:36:17.028 [2024-07-26 16:41:36.734530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.028 [2024-07-26 16:41:36.734568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.028 qpair failed and we were unable to recover it. 00:36:17.028 [2024-07-26 16:41:36.734791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.028 [2024-07-26 16:41:36.734824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.028 qpair failed and we were unable to recover it. 00:36:17.028 [2024-07-26 16:41:36.734990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.028 [2024-07-26 16:41:36.735029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.028 qpair failed and we were unable to recover it. 00:36:17.028 [2024-07-26 16:41:36.735268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.028 [2024-07-26 16:41:36.735305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.028 qpair failed and we were unable to recover it. 
00:36:17.028 [2024-07-26 16:41:36.735491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.028 [2024-07-26 16:41:36.735525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.028 qpair failed and we were unable to recover it. 00:36:17.028 [2024-07-26 16:41:36.735692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.028 [2024-07-26 16:41:36.735729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.028 qpair failed and we were unable to recover it. 00:36:17.297 [2024-07-26 16:41:36.735901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.297 [2024-07-26 16:41:36.735938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.297 qpair failed and we were unable to recover it. 00:36:17.297 [2024-07-26 16:41:36.736106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.297 [2024-07-26 16:41:36.736140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.297 qpair failed and we were unable to recover it. 00:36:17.297 [2024-07-26 16:41:36.736348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.297 [2024-07-26 16:41:36.736386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.297 qpair failed and we were unable to recover it. 00:36:17.297 [2024-07-26 16:41:36.736567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.297 [2024-07-26 16:41:36.736602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.297 qpair failed and we were unable to recover it. 00:36:17.297 [2024-07-26 16:41:36.736747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.297 [2024-07-26 16:41:36.736781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.297 qpair failed and we were unable to recover it. 00:36:17.297 [2024-07-26 16:41:36.736960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.297 [2024-07-26 16:41:36.736995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.297 qpair failed and we were unable to recover it. 00:36:17.297 [2024-07-26 16:41:36.737234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.297 [2024-07-26 16:41:36.737269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.297 qpair failed and we were unable to recover it. 00:36:17.297 [2024-07-26 16:41:36.737449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.297 [2024-07-26 16:41:36.737483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.297 qpair failed and we were unable to recover it. 
00:36:17.297 [2024-07-26 16:41:36.737683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.737720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.737885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.737923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.738129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.738163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.738337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.738386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.738593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.738632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.738856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.738890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.739092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.739130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.739301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.739338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.739575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.739608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.739857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.739923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 
00:36:17.298 [2024-07-26 16:41:36.740118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.740155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.740356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.740389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.740767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.740823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.741019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.741057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.741275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.741309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.741528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.741589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.741779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.741821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.742041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.742082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.742321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.742354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.742504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.742538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 
00:36:17.298 [2024-07-26 16:41:36.742715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.742749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.742948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.742986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.743215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.743250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.743428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.743461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.743644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.743677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.743827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.743860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.744031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.744070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.744240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.744273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.744448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.744481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.744668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.744702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 
00:36:17.298 [2024-07-26 16:41:36.744933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.744970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.745199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.745233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.745410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.745443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.745689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.745744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.745960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.745997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.746211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.746245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.746389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.746422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.746597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.746631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.746800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.746833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.747071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.747105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 
00:36:17.298 [2024-07-26 16:41:36.747310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.747361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.747564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.747598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.747833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.747871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.748096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.748134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.748333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.748366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.748522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.748555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.748775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.748811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.748988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.749021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.749180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.749215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.749419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.749456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 
00:36:17.298 [2024-07-26 16:41:36.749675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.749709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.749916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.749950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.750184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.750222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.750439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.750472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.750838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.750909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.751132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.751170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.751370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.751403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.751645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.751703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.751896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.751933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.752093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.752127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 
00:36:17.298 [2024-07-26 16:41:36.752349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.752385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.752566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.752603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.752826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.752859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.753069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.753106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.753310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.753343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.753518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-07-26 16:41:36.753551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.298 qpair failed and we were unable to recover it. 00:36:17.298 [2024-07-26 16:41:36.753731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.753764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.753974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.754011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.754217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.754251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.754422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.754456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 
00:36:17.299 [2024-07-26 16:41:36.754689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.754726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.754971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.755008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.755221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.755255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.755451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.755505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.755699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.755732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.755953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.755990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.756178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.756216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.756432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.756465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.756730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.756788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.757010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.757043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 
00:36:17.299 [2024-07-26 16:41:36.757231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.757265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.757488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.757525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.757720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.757754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.757930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.757967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.758192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.758226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.758420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.758457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.758630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.758663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.758862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.758911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.759093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.759127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.759308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.759341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 
00:36:17.299 [2024-07-26 16:41:36.759609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.759646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.759865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.759903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.760098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.760131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.760296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.760335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.760595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.760628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.760803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.760836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.761034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.761076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.761271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.761307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.761505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.761538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.761704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.761737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 
00:36:17.299 [2024-07-26 16:41:36.761895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.761933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.762136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.762170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.762319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.762354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.762550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.762589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.762760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.762793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.762998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.763035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.763272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.763306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.763538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.763572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.763789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.763827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.764049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.764094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 
00:36:17.299 [2024-07-26 16:41:36.764287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.764320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.764469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.764503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.764709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.764761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.764992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.765025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.765218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.765256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.765453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.765491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.765696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.765730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.765912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.765945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.766147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.766185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.766449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.766483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 
00:36:17.299 [2024-07-26 16:41:36.766716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.766753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.766946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.766983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.767202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.767236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.767437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.767480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.767674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.767711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.767883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.767916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.768102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.768139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.768330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.768367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.768532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.768565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.768735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.768772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 
00:36:17.299 [2024-07-26 16:41:36.768958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.768996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.769192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.769226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.769424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.769461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.299 [2024-07-26 16:41:36.769682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.299 [2024-07-26 16:41:36.769719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.299 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.769913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.769946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.770132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.770170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.770360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.770398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.770632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.770676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.770893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.770930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.771163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.771197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 
00:36:17.300 [2024-07-26 16:41:36.771351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.771385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.771529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.771563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.771713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.771746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.771955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.772005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.772241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.772274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.772490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.772524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.772698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.772731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.772954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.772991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.773189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.773223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.773390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.773423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 
00:36:17.300 [2024-07-26 16:41:36.773653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.773689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.773889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.773939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.774135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.774170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.774371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.774407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.774604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.774641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.774852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.774885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.775085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.775119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.775330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.775384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.775576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.775610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.775810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.775846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 
00:36:17.300 [2024-07-26 16:41:36.776067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.776118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.776320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.776353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.776587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.776623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.776828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.776869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.777094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.777127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.777303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.777339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.777534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.777570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.777790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.777823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.777994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.778032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.778235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.778272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 
00:36:17.300 [2024-07-26 16:41:36.778440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.778474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.778695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.778732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.778952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.778988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.779206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.779240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.779443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.779481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.779660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.779697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.779887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.779920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.780120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.780157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.780361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.780399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.780611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.780645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 
00:36:17.300 [2024-07-26 16:41:36.780818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.780851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.781081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.781118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.781307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.781346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.781607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.781645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.781866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.781902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.782102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.782136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.782360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.782397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.782587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.782624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.782826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.782860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.783012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.783045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 
00:36:17.300 [2024-07-26 16:41:36.783268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.783305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.783507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.783541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.300 [2024-07-26 16:41:36.783742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.300 [2024-07-26 16:41:36.783779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.300 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.783967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.784004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.784203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.784237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.784445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.784482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.784702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.784739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.784936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.784969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.785231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.785268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.785485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.785522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 
00:36:17.301 [2024-07-26 16:41:36.785729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.785762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.785982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.786029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.786236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.786270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.786465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.786505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.786700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.786737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.786911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.786945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.787147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.787181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.787435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.787468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.787611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.787645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.787846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.787879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 
00:36:17.301 [2024-07-26 16:41:36.788148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.788185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.788409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.788442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.788590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.788624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.788816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.788854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.789052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.789109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.789290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.789323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.789502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.789537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.789728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.789765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.789961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.789994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.790168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.790205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 
00:36:17.301 [2024-07-26 16:41:36.790427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.790464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.790668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.790702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.790858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.790891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.791093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.791144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.791349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.791382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.791590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.791626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.791851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.791884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.792064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.792098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.792298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.792334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.792505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.792539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 
00:36:17.301 [2024-07-26 16:41:36.792698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.792732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.792938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.792976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.793192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.793229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.793398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.793431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.793595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.793632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.793826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.793863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.794054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.794095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.794292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.794329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.794495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.794532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.794725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.794759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 
00:36:17.301 [2024-07-26 16:41:36.794922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.794959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.795132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.795170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.795350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.795383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.795579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.795620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.795814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.795851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.796116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.796150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.796378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.796414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.796607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.796643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.796842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.796875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.797152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.797189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 
00:36:17.301 [2024-07-26 16:41:36.797453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.797489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.797719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.797752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.797986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.798023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.798228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.798262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.798441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.798474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.798615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.798648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.798870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.798906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.799090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.799124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.799279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.799312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.799520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.799556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 
00:36:17.301 [2024-07-26 16:41:36.799774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.799807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.800008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.800041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.301 qpair failed and we were unable to recover it. 00:36:17.301 [2024-07-26 16:41:36.800249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.301 [2024-07-26 16:41:36.800301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.800505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.800539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.800793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.800831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.801068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.801116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.801291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.801325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.801531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.801568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.801794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.801831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.801998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.802032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 
00:36:17.302 [2024-07-26 16:41:36.802257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.802295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.802526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.802563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.802787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.802820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.803052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.803098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.803289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.803327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.803519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.803553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.803746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.803783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.803999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.804035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.804267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.804301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.804481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.804514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 
00:36:17.302 [2024-07-26 16:41:36.804744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.804781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.804968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.805001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.805204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.805241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.805436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.805477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.805709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.805743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.805927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.805960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.806229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.806266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.806464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.806497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.806755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.806792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.806984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.807022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 
00:36:17.302 [2024-07-26 16:41:36.807207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.807242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.807465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.807502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.807673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.807711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.807917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.807950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.808156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.808194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.808384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.808421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.808621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.808654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.808851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.808888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.809109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.809143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.809342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.809375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 
00:36:17.302 [2024-07-26 16:41:36.809550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.809587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.809773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.809809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.810007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.810040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.810229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.810263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.810486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.810523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.810787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.810820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.811028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.811073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.811265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.811302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.811502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.811535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.811737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.811780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 
00:36:17.302 [2024-07-26 16:41:36.812004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.812038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.812215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.812249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.812442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.812479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.812675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.812711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.812884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.812918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.813117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.813155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.813377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.813414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.813588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.813622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.813769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.813804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.813982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.814016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 
00:36:17.302 [2024-07-26 16:41:36.814196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.814230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.814404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.814441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.814633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.814670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.814880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.814918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.815144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.815181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.815408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.815442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.815589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.815622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.302 [2024-07-26 16:41:36.815774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.302 [2024-07-26 16:41:36.815807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.302 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.816037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.816076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.816280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.816323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 
00:36:17.303 [2024-07-26 16:41:36.816525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.816562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.816781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.816818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.817017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.817050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.817265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.817298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.817473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.817506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.817691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.817724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.817986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.818023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.818249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.818283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.818482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.818515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.818737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.818773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 
00:36:17.303 [2024-07-26 16:41:36.818993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.819029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.819274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.819308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.819511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.819545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.819749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.819786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.819979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.820013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.820196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.820230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.820408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.820445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.820668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.820701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.820902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.820936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.821115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.821154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 
00:36:17.303 [2024-07-26 16:41:36.821381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.821414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.821607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.821644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.821867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.821904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.822096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.822129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.822309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.822347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.822552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.822585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.822753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.822786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.822973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.823010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.823193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.823226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.823401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.823435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 
00:36:17.303 [2024-07-26 16:41:36.823633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.823670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.823860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.823897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.824118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.824152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.824379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.824420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.824639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.824676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.824886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.824920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.825098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.825132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.825319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.825357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.825555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.825588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.825753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.825790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 
00:36:17.303 [2024-07-26 16:41:36.826012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.826049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.826253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.826287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.826486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.826524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.826723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.826760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.826930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.826964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.827187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.827224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.827441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.827478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.827674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.827707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.827937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.827973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.828143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.828182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 
00:36:17.303 [2024-07-26 16:41:36.828388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.828421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.828614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.828651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.828857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.828890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.829093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.829127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.829302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.829340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.829540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.829573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.829721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.829755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.829954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.829992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.830191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.303 [2024-07-26 16:41:36.830228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.303 qpair failed and we were unable to recover it. 00:36:17.303 [2024-07-26 16:41:36.830453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.830486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 
00:36:17.304 [2024-07-26 16:41:36.830660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.830697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.830913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.830950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.831133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.831167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.831342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.831405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.831604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.831641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.831835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.831869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.832048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.832089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.832279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.832316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.832542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.832575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.832837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.832874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 
00:36:17.304 [2024-07-26 16:41:36.833141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.833180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.833382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.833415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.833588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.833625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.833853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.833890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.834073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.834106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.834261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.834295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.834484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.834520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.834713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.834746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.834948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.834985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.835195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.835229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 
00:36:17.304 [2024-07-26 16:41:36.835399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.835432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.835606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.835640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.835818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.835851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.836022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.836056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.836235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.836269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.836465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.836501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.836669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.836702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.836926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.836963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.837158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.837195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.837376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.837409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 
00:36:17.304 [2024-07-26 16:41:36.837605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.837642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.837864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.837897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.838070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.838104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.838317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.838353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.838536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.838572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.838762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.838795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.838972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.839009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.839221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.839255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.839432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.839466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.839643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.839676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 
00:36:17.304 [2024-07-26 16:41:36.839910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.839947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.840119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.840155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.840387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.840424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.840617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.840654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.840924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.840958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.841192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.841229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.841402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.841439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.841633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.841672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.841852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.841885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.842146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.842185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 
00:36:17.304 [2024-07-26 16:41:36.842408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.842442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.842626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.842660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.842844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.842882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.843087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.843127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.843329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.843367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.843561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.843600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.843794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.843828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.844028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.844071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.844262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.844299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.844497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.844531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 
00:36:17.304 [2024-07-26 16:41:36.844744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.844781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.844975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.845012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.845195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.845229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.304 [2024-07-26 16:41:36.845422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.304 [2024-07-26 16:41:36.845461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.304 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.845654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.845690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.845871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.845906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.846094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.846129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.846311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.846354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.846534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.846569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.846790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.846827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 
00:36:17.305 [2024-07-26 16:41:36.847048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.847092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.847321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.847354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.847522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.847559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.847754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.847792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.847993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.848027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.848213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.848248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.848458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.848496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.848707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.848741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.848927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.848964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.849140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.849174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 
00:36:17.305 [2024-07-26 16:41:36.849379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.849412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.849576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.849613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.849807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.849845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.850072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.850106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.850374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.850411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.850643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.850677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.850877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.850911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.851102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.851140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.851334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.851371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.851591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.851625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 
00:36:17.305 [2024-07-26 16:41:36.851852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.851889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.852073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.852111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.852267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.852299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.852477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.852514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.852684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.852717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.852888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.852921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.853093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.853127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.853333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.853370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.853565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.853598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.853780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.853813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 
00:36:17.305 [2024-07-26 16:41:36.853994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.854027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.854236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.854269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.854461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.854498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.854659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.854697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.854865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.854898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.855120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.855158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.855382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.855415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.855598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.855632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.855809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.855846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.856047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.856090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 
00:36:17.305 [2024-07-26 16:41:36.856298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.856332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.856509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.856546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.856741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.856777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.857005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.857039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.857244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.857281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.857476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.857513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.857721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.857755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.857930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.857963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.858176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.858213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.858385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.858419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 
00:36:17.305 [2024-07-26 16:41:36.858653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.858690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.858884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.858922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.859134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.859168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.859344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.859378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.859586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.859623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.305 [2024-07-26 16:41:36.859852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.305 [2024-07-26 16:41:36.859886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.305 qpair failed and we were unable to recover it. 00:36:17.306 [2024-07-26 16:41:36.860089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.306 [2024-07-26 16:41:36.860126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.306 qpair failed and we were unable to recover it. 00:36:17.306 [2024-07-26 16:41:36.860394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.306 [2024-07-26 16:41:36.860431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.306 qpair failed and we were unable to recover it. 00:36:17.306 [2024-07-26 16:41:36.860633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.306 [2024-07-26 16:41:36.860666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.306 qpair failed and we were unable to recover it. 00:36:17.306 [2024-07-26 16:41:36.860832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.306 [2024-07-26 16:41:36.860869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.306 qpair failed and we were unable to recover it. 
00:36:17.306 [2024-07-26 16:41:36.861103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.306 [2024-07-26 16:41:36.861141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.306 qpair failed and we were unable to recover it. 00:36:17.306 [2024-07-26 16:41:36.861337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.306 [2024-07-26 16:41:36.861379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.306 qpair failed and we were unable to recover it. 00:36:17.306 [2024-07-26 16:41:36.861557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.306 [2024-07-26 16:41:36.861590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.306 qpair failed and we were unable to recover it. 00:36:17.306 [2024-07-26 16:41:36.861806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.306 [2024-07-26 16:41:36.861848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.306 qpair failed and we were unable to recover it. 00:36:17.306 [2024-07-26 16:41:36.862078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.306 [2024-07-26 16:41:36.862111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.306 qpair failed and we were unable to recover it. 00:36:17.306 [2024-07-26 16:41:36.862308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.306 [2024-07-26 16:41:36.862346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.306 qpair failed and we were unable to recover it. 00:36:17.306 [2024-07-26 16:41:36.862574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.306 [2024-07-26 16:41:36.862612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.306 qpair failed and we were unable to recover it. 00:36:17.306 [2024-07-26 16:41:36.862814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.306 [2024-07-26 16:41:36.862847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.306 qpair failed and we were unable to recover it. 00:36:17.306 [2024-07-26 16:41:36.863020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.306 [2024-07-26 16:41:36.863053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.306 qpair failed and we were unable to recover it. 00:36:17.306 [2024-07-26 16:41:36.863238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.306 [2024-07-26 16:41:36.863272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.306 qpair failed and we were unable to recover it. 
00:36:17.306 [2024-07-26 16:41:36.863472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.306 [2024-07-26 16:41:36.863505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:36:17.306 qpair failed and we were unable to recover it.
00:36:17.306 [... the same three-line sequence (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 16:41:36.863678 through 16:41:36.912026 ...]
00:36:17.309 [2024-07-26 16:41:36.912320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.309 [2024-07-26 16:41:36.912354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:36:17.309 qpair failed and we were unable to recover it.
00:36:17.309 [2024-07-26 16:41:36.912530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.309 [2024-07-26 16:41:36.912577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.309 qpair failed and we were unable to recover it. 00:36:17.309 [2024-07-26 16:41:36.912756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.309 [2024-07-26 16:41:36.912790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.309 qpair failed and we were unable to recover it. 00:36:17.309 [2024-07-26 16:41:36.913012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.309 [2024-07-26 16:41:36.913048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.309 qpair failed and we were unable to recover it. 00:36:17.309 [2024-07-26 16:41:36.913225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.309 [2024-07-26 16:41:36.913259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.309 qpair failed and we were unable to recover it. 00:36:17.309 [2024-07-26 16:41:36.913462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.309 [2024-07-26 16:41:36.913501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.309 qpair failed and we were unable to recover it. 00:36:17.309 [2024-07-26 16:41:36.913665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.309 [2024-07-26 16:41:36.913702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.309 qpair failed and we were unable to recover it. 00:36:17.309 [2024-07-26 16:41:36.913905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.309 [2024-07-26 16:41:36.913940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.309 qpair failed and we were unable to recover it. 00:36:17.309 [2024-07-26 16:41:36.914151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.309 [2024-07-26 16:41:36.914189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.309 qpair failed and we were unable to recover it. 00:36:17.309 [2024-07-26 16:41:36.914390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.309 [2024-07-26 16:41:36.914424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.309 qpair failed and we were unable to recover it. 00:36:17.309 [2024-07-26 16:41:36.914605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.309 [2024-07-26 16:41:36.914639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.309 qpair failed and we were unable to recover it. 
00:36:17.309 [2024-07-26 16:41:36.914815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.309 [2024-07-26 16:41:36.914853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.309 qpair failed and we were unable to recover it. 00:36:17.309 [2024-07-26 16:41:36.915056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.309 [2024-07-26 16:41:36.915100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.309 qpair failed and we were unable to recover it. 00:36:17.309 [2024-07-26 16:41:36.915279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.309 [2024-07-26 16:41:36.915314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.309 qpair failed and we were unable to recover it. 00:36:17.309 [2024-07-26 16:41:36.915515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.309 [2024-07-26 16:41:36.915552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.309 qpair failed and we were unable to recover it. 00:36:17.309 [2024-07-26 16:41:36.915749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.309 [2024-07-26 16:41:36.915786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.309 qpair failed and we were unable to recover it. 00:36:17.309 [2024-07-26 16:41:36.915991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.309 [2024-07-26 16:41:36.916025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.309 qpair failed and we were unable to recover it. 00:36:17.309 [2024-07-26 16:41:36.916199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.309 [2024-07-26 16:41:36.916236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.309 qpair failed and we were unable to recover it. 00:36:17.309 [2024-07-26 16:41:36.916408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.309 [2024-07-26 16:41:36.916445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.309 qpair failed and we were unable to recover it. 00:36:17.309 [2024-07-26 16:41:36.916622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.309 [2024-07-26 16:41:36.916657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.309 qpair failed and we were unable to recover it. 00:36:17.309 [2024-07-26 16:41:36.916825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.309 [2024-07-26 16:41:36.916859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.309 qpair failed and we were unable to recover it. 
00:36:17.309 [2024-07-26 16:41:36.917054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.309 [2024-07-26 16:41:36.917098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.309 qpair failed and we were unable to recover it. 00:36:17.309 [2024-07-26 16:41:36.917295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.309 [2024-07-26 16:41:36.917340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.309 qpair failed and we were unable to recover it. 00:36:17.309 [2024-07-26 16:41:36.917560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.309 [2024-07-26 16:41:36.917597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.309 qpair failed and we were unable to recover it. 00:36:17.309 [2024-07-26 16:41:36.917793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.309 [2024-07-26 16:41:36.917830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.309 qpair failed and we were unable to recover it. 00:36:17.309 [2024-07-26 16:41:36.918022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.309 [2024-07-26 16:41:36.918056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.309 qpair failed and we were unable to recover it. 00:36:17.309 [2024-07-26 16:41:36.918244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.309 [2024-07-26 16:41:36.918294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.309 qpair failed and we were unable to recover it. 00:36:17.309 [2024-07-26 16:41:36.918510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.309 [2024-07-26 16:41:36.918549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.309 qpair failed and we were unable to recover it. 00:36:17.309 [2024-07-26 16:41:36.918762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.309 [2024-07-26 16:41:36.918795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.309 qpair failed and we were unable to recover it. 00:36:17.309 [2024-07-26 16:41:36.919032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.309 [2024-07-26 16:41:36.919079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.309 qpair failed and we were unable to recover it. 00:36:17.309 [2024-07-26 16:41:36.919262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.309 [2024-07-26 16:41:36.919300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.309 qpair failed and we were unable to recover it. 
00:36:17.309 [2024-07-26 16:41:36.919508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.309 [2024-07-26 16:41:36.919542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.309 qpair failed and we were unable to recover it. 00:36:17.309 [2024-07-26 16:41:36.919715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.919752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.919923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.919960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.920142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.920176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.920354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.920391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.920618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.920652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.920811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.920845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.921000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.921078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.921266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.921301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.921453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.921497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 
00:36:17.310 [2024-07-26 16:41:36.921655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.921690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.921886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.921924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.922153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.922187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.922366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.922405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.922620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.922657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.922856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.922899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.923104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.923152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.923322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.923360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.923564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.923597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.923753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.923787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 
00:36:17.310 [2024-07-26 16:41:36.923958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.923996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.924177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.924223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.924429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.924466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.924644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.924677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.924877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.924910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.925086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.925125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.925311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.925347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.925530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.925563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.925743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.925776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.925982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.926019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 
00:36:17.310 [2024-07-26 16:41:36.926216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.926251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.926409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.926443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.926655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.926688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.926845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.926881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.927095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.927132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.927327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.927364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.927538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.927573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.927768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.927805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.927982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.928019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.928206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.928241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 
00:36:17.310 [2024-07-26 16:41:36.928399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.928436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.928638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.928675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.928855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.928888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.929100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.929145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.929345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.929383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.929550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.929584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.929778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.929815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.929990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.930040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.930231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.930265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.930464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.930502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 
00:36:17.310 [2024-07-26 16:41:36.930696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.930733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.930965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.930999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.931237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.931302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.931484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.931522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.931735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.931771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.931985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.932024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.932267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.932302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.932485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.932520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.932825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.932884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.933156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.933197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 
00:36:17.310 [2024-07-26 16:41:36.933402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.933442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.933767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.933827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.934001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.934039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.934256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.934291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.934514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.934555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.310 qpair failed and we were unable to recover it. 00:36:17.310 [2024-07-26 16:41:36.934775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.310 [2024-07-26 16:41:36.934809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.311 qpair failed and we were unable to recover it. 00:36:17.311 [2024-07-26 16:41:36.934958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.311 [2024-07-26 16:41:36.934992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.311 qpair failed and we were unable to recover it. 00:36:17.311 [2024-07-26 16:41:36.935150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.311 [2024-07-26 16:41:36.935188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.311 qpair failed and we were unable to recover it. 00:36:17.311 [2024-07-26 16:41:36.935398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.311 [2024-07-26 16:41:36.935441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.311 qpair failed and we were unable to recover it. 00:36:17.311 [2024-07-26 16:41:36.935640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.311 [2024-07-26 16:41:36.935674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.311 qpair failed and we were unable to recover it. 
00:36:17.311 [2024-07-26 16:41:36.935823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.311 [2024-07-26 16:41:36.935860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.311 qpair failed and we were unable to recover it. 00:36:17.311 [2024-07-26 16:41:36.936023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.311 [2024-07-26 16:41:36.936057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.311 qpair failed and we were unable to recover it. 00:36:17.311 [2024-07-26 16:41:36.936301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.311 [2024-07-26 16:41:36.936335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.311 qpair failed and we were unable to recover it. 00:36:17.311 [2024-07-26 16:41:36.936574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.311 [2024-07-26 16:41:36.936634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.311 qpair failed and we were unable to recover it. 00:36:17.311 [2024-07-26 16:41:36.936857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.311 [2024-07-26 16:41:36.936896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.311 qpair failed and we were unable to recover it. 00:36:17.311 [2024-07-26 16:41:36.937076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.311 [2024-07-26 16:41:36.937110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.311 qpair failed and we were unable to recover it. 00:36:17.311 [2024-07-26 16:41:36.937316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.311 [2024-07-26 16:41:36.937358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.311 qpair failed and we were unable to recover it. 00:36:17.311 [2024-07-26 16:41:36.937533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.311 [2024-07-26 16:41:36.937571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.311 qpair failed and we were unable to recover it. 00:36:17.311 [2024-07-26 16:41:36.937745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.311 [2024-07-26 16:41:36.937779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.311 qpair failed and we were unable to recover it. 00:36:17.311 [2024-07-26 16:41:36.937935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.311 [2024-07-26 16:41:36.937973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.311 qpair failed and we were unable to recover it. 
00:36:17.311 [2024-07-26 16:41:36.938167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.311 [2024-07-26 16:41:36.938206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.311 qpair failed and we were unable to recover it. 00:36:17.311 [2024-07-26 16:41:36.938411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.311 [2024-07-26 16:41:36.938445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.311 qpair failed and we were unable to recover it. 00:36:17.311 [2024-07-26 16:41:36.938618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.311 [2024-07-26 16:41:36.938652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.311 qpair failed and we were unable to recover it. 00:36:17.311 [2024-07-26 16:41:36.938840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.311 [2024-07-26 16:41:36.938874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.311 qpair failed and we were unable to recover it. 00:36:17.311 [2024-07-26 16:41:36.939063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.311 [2024-07-26 16:41:36.939098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.311 qpair failed and we were unable to recover it. 00:36:17.311 [2024-07-26 16:41:36.939323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.311 [2024-07-26 16:41:36.939361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.311 qpair failed and we were unable to recover it. 00:36:17.311 [2024-07-26 16:41:36.939589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.311 [2024-07-26 16:41:36.939623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.311 qpair failed and we were unable to recover it. 00:36:17.311 [2024-07-26 16:41:36.939797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.311 [2024-07-26 16:41:36.939831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.311 qpair failed and we were unable to recover it. 00:36:17.311 [2024-07-26 16:41:36.940024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.311 [2024-07-26 16:41:36.940072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.311 qpair failed and we were unable to recover it. 00:36:17.311 [2024-07-26 16:41:36.940269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.311 [2024-07-26 16:41:36.940311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.311 qpair failed and we were unable to recover it. 
00:36:17.311 [2024-07-26 16:41:36.940510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.311 [2024-07-26 16:41:36.940544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.311 qpair failed and we were unable to recover it. 00:36:17.311 [2024-07-26 16:41:36.940760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.311 [2024-07-26 16:41:36.940819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.311 qpair failed and we were unable to recover it. 00:36:17.311 [2024-07-26 16:41:36.941016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.311 [2024-07-26 16:41:36.941054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.311 qpair failed and we were unable to recover it. 00:36:17.311 [2024-07-26 16:41:36.941241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.311 [2024-07-26 16:41:36.941275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.311 qpair failed and we were unable to recover it. 00:36:17.311 [2024-07-26 16:41:36.941448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.311 [2024-07-26 16:41:36.941485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.311 qpair failed and we were unable to recover it. 00:36:17.311 [2024-07-26 16:41:36.941679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.311 [2024-07-26 16:41:36.941716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.311 qpair failed and we were unable to recover it. 00:36:17.311 [2024-07-26 16:41:36.941913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.311 [2024-07-26 16:41:36.941948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.311 qpair failed and we were unable to recover it. 00:36:17.311 [2024-07-26 16:41:36.942125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.311 [2024-07-26 16:41:36.942163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.311 qpair failed and we were unable to recover it. 00:36:17.311 [2024-07-26 16:41:36.942338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.311 [2024-07-26 16:41:36.942372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.311 qpair failed and we were unable to recover it. 00:36:17.311 [2024-07-26 16:41:36.942578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.311 [2024-07-26 16:41:36.942613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.311 qpair failed and we were unable to recover it. 
00:36:17.311 [2024-07-26 16:41:36.942817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.311 [2024-07-26 16:41:36.942861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:36:17.311 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error pair repeats back-to-back for tqpair=0x61500021ff00 and tqpair=0x6150001ffe80 (addr=10.0.0.2, port=4420) through 16:41:36.993106, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:36:17.314 [2024-07-26 16:41:36.993286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.314 [2024-07-26 16:41:36.993319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.314 qpair failed and we were unable to recover it. 00:36:17.314 [2024-07-26 16:41:36.993531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.314 [2024-07-26 16:41:36.993569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.314 qpair failed and we were unable to recover it. 00:36:17.314 [2024-07-26 16:41:36.993768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.314 [2024-07-26 16:41:36.993802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.314 qpair failed and we were unable to recover it. 00:36:17.314 [2024-07-26 16:41:36.993999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.314 [2024-07-26 16:41:36.994035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.314 qpair failed and we were unable to recover it. 00:36:17.314 [2024-07-26 16:41:36.994217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.314 [2024-07-26 16:41:36.994255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.314 qpair failed and we were unable to recover it. 00:36:17.314 [2024-07-26 16:41:36.994429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.314 [2024-07-26 16:41:36.994464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.314 qpair failed and we were unable to recover it. 00:36:17.314 [2024-07-26 16:41:36.994691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.314 [2024-07-26 16:41:36.994751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.314 qpair failed and we were unable to recover it. 00:36:17.314 [2024-07-26 16:41:36.994956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.314 [2024-07-26 16:41:36.994995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.314 qpair failed and we were unable to recover it. 00:36:17.314 [2024-07-26 16:41:36.995196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.314 [2024-07-26 16:41:36.995230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.314 qpair failed and we were unable to recover it. 00:36:17.314 [2024-07-26 16:41:36.995474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.314 [2024-07-26 16:41:36.995529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.314 qpair failed and we were unable to recover it. 
00:36:17.314 [2024-07-26 16:41:36.995740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.314 [2024-07-26 16:41:36.995781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.314 qpair failed and we were unable to recover it. 00:36:17.314 [2024-07-26 16:41:36.996011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.314 [2024-07-26 16:41:36.996046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.314 qpair failed and we were unable to recover it. 00:36:17.314 [2024-07-26 16:41:36.996272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.314 [2024-07-26 16:41:36.996309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.314 qpair failed and we were unable to recover it. 00:36:17.314 [2024-07-26 16:41:36.996470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:36.996508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:36.996697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:36.996731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:36.996967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:36.997004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:36.997208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:36.997243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:36.997452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:36.997492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:36.997860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:36.997916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:36.998136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:36.998175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 
00:36:17.315 [2024-07-26 16:41:36.998339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:36.998373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:36.998654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:36.998712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:36.999586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:36.999628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:36.999831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:36.999865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.000044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.000092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.000276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.000312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.000560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.000594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.000953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.001008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.001208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.001243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.001425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.001460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 
00:36:17.315 [2024-07-26 16:41:37.001687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.001724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.001939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.001973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.002151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.002186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.002346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.002381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.002561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.002594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.002827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.002861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.003090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.003129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.003342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.003375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.003550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.003585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.003886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.003942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 
00:36:17.315 [2024-07-26 16:41:37.004142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.004183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.004404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.004438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.005142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.005184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.005388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.005427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.005677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.005712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.005917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.005955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.006151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.006191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.006383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.006417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.006637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.006692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.006896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.006938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 
00:36:17.315 [2024-07-26 16:41:37.007168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.007204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.007371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.007409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.007611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.007649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.007841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.007874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.008052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.008110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.008924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.008967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.009185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.009220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.009423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.009465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.009648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.009685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.009882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.009915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 
00:36:17.315 [2024-07-26 16:41:37.010154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.010193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.010390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.010428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.010622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.010655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.010851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.010889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.011091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.011126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.011279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.011313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.011566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.011600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.011822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.011860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.012041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.012096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.012254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.012288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 
00:36:17.315 [2024-07-26 16:41:37.012457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.012490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.012697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.012731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.012960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.012994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.013202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.013241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.013446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.013480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.013928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.013979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.014163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.014203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.014425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.014458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.014675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.014708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.014923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.014960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 
00:36:17.315 [2024-07-26 16:41:37.015133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.015167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.015364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.015401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.015585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.015622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.015793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.015826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.315 qpair failed and we were unable to recover it. 00:36:17.315 [2024-07-26 16:41:37.016076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.315 [2024-07-26 16:41:37.016114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.016279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.016316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.016528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.016561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.016842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.016880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.017102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.017139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.017336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.017384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 
00:36:17.316 [2024-07-26 16:41:37.017602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.017640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.017838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.017875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.018044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.018093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.018286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.018323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.018507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.018540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.018757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.018791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.019031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.019077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.019278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.019320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.019525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.019559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.019874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.019931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 
00:36:17.316 [2024-07-26 16:41:37.020113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.020152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.020365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.020397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.020603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.020640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.020856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.020893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.021128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.021162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.021387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.021424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.021590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.021629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.021871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.021904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.022096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.022135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.022357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.022394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 
00:36:17.316 [2024-07-26 16:41:37.022622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.022654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.022908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.022944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.023141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.023180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.023361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.023395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.023599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.023636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.023829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.023865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.024051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.024093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.024253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.024287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.024491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.024528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.024723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.024756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 
00:36:17.316 [2024-07-26 16:41:37.024931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.024968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.025160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.025198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.025375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.025408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.025566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.025600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.025804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.025854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.026068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.026102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.026278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.026311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.026557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.026594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.026839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.026872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.027052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.027106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 
00:36:17.316 [2024-07-26 16:41:37.027331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.027370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.027572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.027607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.027776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.027813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.028042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.028089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.028231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.028264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.028471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.028526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.028749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.028782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.028953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.028990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.029241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.029297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.029530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.029572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 
00:36:17.316 [2024-07-26 16:41:37.029749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.316 [2024-07-26 16:41:37.029784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.316 qpair failed and we were unable to recover it. 00:36:17.316 [2024-07-26 16:41:37.029937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.029971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.030154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.030190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.030445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.030480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.030812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.030897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.031132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.031170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.031359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.031393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.031823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.031895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.032142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.032194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.032411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.032445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 
00:36:17.317 [2024-07-26 16:41:37.032656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.032713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.032885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.032922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.033102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.033137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.033326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.033382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.033624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.033664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.033866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.033900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.034138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.034178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.034348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.034386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.034593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.034627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.034883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.034953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 
00:36:17.317 [2024-07-26 16:41:37.035181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.035218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.035398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.035444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.035691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.035747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.035930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.035968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.036191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.036226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.036435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.036472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.036675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.036710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.036908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.036942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.037139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.037174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.037330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.037375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 
00:36:17.317 [2024-07-26 16:41:37.037593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.037628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.037810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.037844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.038047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.038106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.038305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.038340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.038547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.038581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.038725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.038759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.038960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.038993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.039210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.039254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.039454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.039491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.039713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.039746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 
00:36:17.317 [2024-07-26 16:41:37.039921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.039960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.040178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.040217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.040413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.040447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.040684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.040721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.040947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.040984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.041195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.041230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.041432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.041470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.041666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.041703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.041876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.041910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.042139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.042178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 
00:36:17.317 [2024-07-26 16:41:37.042340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.042388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.042561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.042595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.042745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.042780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.042956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.042991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.043247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.043282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.043487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.043526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.043730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.043764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.043944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.043979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.044213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.044267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.044478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.044530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 
00:36:17.317 [2024-07-26 16:41:37.044735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.044769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.044977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.045014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.045238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.045273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.045441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.045474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.045682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.045735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.045933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.045971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.046180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.046214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.046370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.046414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.317 [2024-07-26 16:41:37.046594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.317 [2024-07-26 16:41:37.046628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.317 qpair failed and we were unable to recover it. 00:36:17.318 [2024-07-26 16:41:37.046815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.318 [2024-07-26 16:41:37.046848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.318 qpair failed and we were unable to recover it. 
00:36:17.318 [2024-07-26 16:41:37.047070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.318 [2024-07-26 16:41:37.047124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.318 qpair failed and we were unable to recover it. 00:36:17.318 [2024-07-26 16:41:37.047303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.318 [2024-07-26 16:41:37.047337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.318 qpair failed and we were unable to recover it. 00:36:17.585 [2024-07-26 16:41:37.047523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.585 [2024-07-26 16:41:37.047556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.585 qpair failed and we were unable to recover it. 00:36:17.585 [2024-07-26 16:41:37.047757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.585 [2024-07-26 16:41:37.047791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.585 qpair failed and we were unable to recover it. 00:36:17.585 [2024-07-26 16:41:37.047961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.585 [2024-07-26 16:41:37.047998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.585 qpair failed and we were unable to recover it. 00:36:17.585 [2024-07-26 16:41:37.048182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.585 [2024-07-26 16:41:37.048216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.585 qpair failed and we were unable to recover it. 00:36:17.585 [2024-07-26 16:41:37.048407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.585 [2024-07-26 16:41:37.048445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.585 qpair failed and we were unable to recover it. 00:36:17.585 [2024-07-26 16:41:37.048666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.585 [2024-07-26 16:41:37.048708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.585 qpair failed and we were unable to recover it. 00:36:17.585 [2024-07-26 16:41:37.048926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.585 [2024-07-26 16:41:37.048959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.585 qpair failed and we were unable to recover it. 00:36:17.585 [2024-07-26 16:41:37.049191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.585 [2024-07-26 16:41:37.049229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.585 qpair failed and we were unable to recover it. 
00:36:17.585 [2024-07-26 16:41:37.049438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.585 [2024-07-26 16:41:37.049472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.585 qpair failed and we were unable to recover it. 00:36:17.585 [2024-07-26 16:41:37.049618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.585 [2024-07-26 16:41:37.049651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.585 qpair failed and we were unable to recover it. 00:36:17.585 [2024-07-26 16:41:37.049796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.585 [2024-07-26 16:41:37.049829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.585 qpair failed and we were unable to recover it. 00:36:17.585 [2024-07-26 16:41:37.050006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.585 [2024-07-26 16:41:37.050047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.585 qpair failed and we were unable to recover it. 00:36:17.585 [2024-07-26 16:41:37.050256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.585 [2024-07-26 16:41:37.050290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.585 qpair failed and we were unable to recover it. 00:36:17.585 [2024-07-26 16:41:37.050585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.585 [2024-07-26 16:41:37.050659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.585 qpair failed and we were unable to recover it. 00:36:17.585 [2024-07-26 16:41:37.050870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.585 [2024-07-26 16:41:37.050912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.585 qpair failed and we were unable to recover it. 00:36:17.585 [2024-07-26 16:41:37.051093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.585 [2024-07-26 16:41:37.051129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.585 qpair failed and we were unable to recover it. 00:36:17.585 [2024-07-26 16:41:37.051333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.585 [2024-07-26 16:41:37.051380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.585 qpair failed and we were unable to recover it. 00:36:17.585 [2024-07-26 16:41:37.051558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.585 [2024-07-26 16:41:37.051592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.585 qpair failed and we were unable to recover it. 
00:36:17.585 [2024-07-26 16:41:37.051798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.585 [2024-07-26 16:41:37.051832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.585 qpair failed and we were unable to recover it. 00:36:17.585 [2024-07-26 16:41:37.051985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.585 [2024-07-26 16:41:37.052034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.585 qpair failed and we were unable to recover it. 00:36:17.585 [2024-07-26 16:41:37.052258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.585 [2024-07-26 16:41:37.052296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.585 qpair failed and we were unable to recover it. 00:36:17.585 [2024-07-26 16:41:37.052515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.585 [2024-07-26 16:41:37.052565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.585 qpair failed and we were unable to recover it. 00:36:17.585 [2024-07-26 16:41:37.052820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.585 [2024-07-26 16:41:37.052877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.585 qpair failed and we were unable to recover it. 00:36:17.585 [2024-07-26 16:41:37.053095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.585 [2024-07-26 16:41:37.053146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.585 qpair failed and we were unable to recover it. 00:36:17.585 [2024-07-26 16:41:37.053314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.585 [2024-07-26 16:41:37.053359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.585 qpair failed and we were unable to recover it. 00:36:17.585 [2024-07-26 16:41:37.053648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.585 [2024-07-26 16:41:37.053726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.585 qpair failed and we were unable to recover it. 00:36:17.585 [2024-07-26 16:41:37.053960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.585 [2024-07-26 16:41:37.053994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.585 qpair failed and we were unable to recover it. 00:36:17.585 [2024-07-26 16:41:37.054166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.585 [2024-07-26 16:41:37.054200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.585 qpair failed and we were unable to recover it. 
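The same pair of messages keeps appearing because the host side evidently retries the qpair socket connect (the log alternates between the tqpair objects at 0x6150001ffe80 and 0x61500021ff00) and every attempt is refused; once an attempt is abandoned the log records "qpair failed and we were unable to recover it." As a rough illustration of that retry-until-give-up pattern, here is a simplified sketch with a hypothetical try_connect() helper and an arbitrary retry budget; it is not the actual SPDK retry logic:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* try_connect() is a helper written only for this sketch. */
static bool try_connect(const char *ip, uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        return false;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    bool ok = (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0);
    int saved = errno;          /* preserve errno across close() */
    close(fd);
    errno = saved;
    return ok;
}

int main(void)
{
    const int max_attempts = 5; /* arbitrary budget for illustration */

    for (int attempt = 1; attempt <= max_attempts; attempt++) {
        if (try_connect("10.0.0.2", 4420)) {
            printf("attempt %d: connected\n", attempt);
            return 0;
        }
        /* Each refused attempt mirrors one posix_sock_create/nvme_tcp pair in the log. */
        printf("attempt %d: connect() failed, errno = %d (%s)\n",
               attempt, errno, strerror(errno));
        usleep(100 * 1000);     /* brief back-off before retrying */
    }

    fprintf(stderr, "giving up after %d attempts\n", max_attempts);
    return 1;
}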
00:36:17.585 [2024-07-26 16:41:37.054425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.585 [2024-07-26 16:41:37.054462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.585 qpair failed and we were unable to recover it. 00:36:17.585 [2024-07-26 16:41:37.054626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.585 [2024-07-26 16:41:37.054663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.585 qpair failed and we were unable to recover it. 00:36:17.585 [2024-07-26 16:41:37.054885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.585 [2024-07-26 16:41:37.054919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.585 qpair failed and we were unable to recover it. 00:36:17.585 [2024-07-26 16:41:37.055158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.585 [2024-07-26 16:41:37.055196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.585 qpair failed and we were unable to recover it. 00:36:17.585 [2024-07-26 16:41:37.055397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.585 [2024-07-26 16:41:37.055436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.585 qpair failed and we were unable to recover it. 00:36:17.585 [2024-07-26 16:41:37.055672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.585 [2024-07-26 16:41:37.055706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.585 qpair failed and we were unable to recover it. 00:36:17.585 [2024-07-26 16:41:37.055877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.585 [2024-07-26 16:41:37.055914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.585 qpair failed and we were unable to recover it. 00:36:17.585 [2024-07-26 16:41:37.056139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.585 [2024-07-26 16:41:37.056177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.585 qpair failed and we were unable to recover it. 00:36:17.585 [2024-07-26 16:41:37.056397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.585 [2024-07-26 16:41:37.056430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.585 qpair failed and we were unable to recover it. 00:36:17.585 [2024-07-26 16:41:37.056600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.056637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 
00:36:17.586 [2024-07-26 16:41:37.056854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.056891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.057100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.057135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.057361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.057414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.057618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.057658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.057864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.057898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.058105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.058144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.058313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.058362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.058562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.058601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.058895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.058935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.059117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.059151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 
00:36:17.586 [2024-07-26 16:41:37.059372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.059407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.059584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.059619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.059843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.059880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.060081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.060116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.060334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.060396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.060604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.060644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.060847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.060883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.061090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.061129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.061349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.061387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.061557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.061591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 
00:36:17.586 [2024-07-26 16:41:37.061824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.061879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.062081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.062119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.062316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.062353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.062567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.062621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.062830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.062869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.063094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.063129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.063330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.063378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.063584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.063624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.063806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.063839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.064005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.064052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 
00:36:17.586 [2024-07-26 16:41:37.064238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.064277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.064500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.064533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.064829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.064890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.065159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.065197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.065399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.065434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.065615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.065648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.065800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.065834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.065984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.066018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.066244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.066282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.066488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.066525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 
00:36:17.586 [2024-07-26 16:41:37.066693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.066727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.066946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.066984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.067211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.067248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.067450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.067486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.067705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.067739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.067946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.067999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.068231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.068265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.068441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.068483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.068679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.068717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.068914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.068958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 
00:36:17.586 [2024-07-26 16:41:37.069162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.069201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.069430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.069468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.069651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.069686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.069875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.069912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.070119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.070154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.070334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.070374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.070573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.070610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.070802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.070840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.071028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.071081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 00:36:17.586 [2024-07-26 16:41:37.071310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.586 [2024-07-26 16:41:37.071358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.586 qpair failed and we were unable to recover it. 
00:36:17.589 [2024-07-26 16:41:37.121033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.589 [2024-07-26 16:41:37.121080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.589 qpair failed and we were unable to recover it. 00:36:17.589 [2024-07-26 16:41:37.121278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.589 [2024-07-26 16:41:37.121315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.589 qpair failed and we were unable to recover it. 00:36:17.589 [2024-07-26 16:41:37.121500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.589 [2024-07-26 16:41:37.121533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.589 qpair failed and we were unable to recover it. 00:36:17.589 [2024-07-26 16:41:37.121678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.589 [2024-07-26 16:41:37.121711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.589 qpair failed and we were unable to recover it. 00:36:17.589 [2024-07-26 16:41:37.121935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.589 [2024-07-26 16:41:37.121972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.589 qpair failed and we were unable to recover it. 00:36:17.589 [2024-07-26 16:41:37.122179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.589 [2024-07-26 16:41:37.122213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.589 qpair failed and we were unable to recover it. 00:36:17.589 [2024-07-26 16:41:37.122389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.589 [2024-07-26 16:41:37.122423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.589 qpair failed and we were unable to recover it. 00:36:17.589 [2024-07-26 16:41:37.122614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.589 [2024-07-26 16:41:37.122651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.589 qpair failed and we were unable to recover it. 00:36:17.589 [2024-07-26 16:41:37.122849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.122882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.123054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.123100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 
00:36:17.590 [2024-07-26 16:41:37.123323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.123370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.123596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.123630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.123801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.123837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.124027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.124072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.124251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.124290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.124472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.124506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.124651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.124685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.124863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.124896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.125107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.125145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.125317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.125365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 
00:36:17.590 [2024-07-26 16:41:37.125571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.125604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.125749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.125783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.126008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.126055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.126258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.126291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.126442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.126476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.126670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.126707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.126907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.126940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.127143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.127180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.127372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.127409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.127602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.127635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 
00:36:17.590 [2024-07-26 16:41:37.127826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.127863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.128030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.128075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.128307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.128341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.128508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.128556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.128706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.128743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.128927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.128960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.129154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.129192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.129352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.129389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.129590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.129624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.129865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.129898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 
00:36:17.590 [2024-07-26 16:41:37.130103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.130140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.130346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.130392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.130588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.130625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.130832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.130869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.131085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.131120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.131329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.131366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.131539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.131576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.131772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.131804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.131979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.132016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.132248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.132286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 
00:36:17.590 [2024-07-26 16:41:37.132489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.132522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.132719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.132757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.132952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.132989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.133172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.133206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.133360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.133398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.133627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.133665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.133860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.133894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.134147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.134184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.134435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.134473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.134663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.134707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 
00:36:17.590 [2024-07-26 16:41:37.134885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.134919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.135097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.135131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.135348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.135382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.135543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.135576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.135781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.135818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.136049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.136089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.136300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.136334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.136514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.136548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.136735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.136768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.590 [2024-07-26 16:41:37.136915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.136948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 
00:36:17.590 [2024-07-26 16:41:37.137134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.590 [2024-07-26 16:41:37.137168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.590 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.137388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.137421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.137608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.137645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.137802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.137839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.138033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.138074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.138305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.138349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.138547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.138584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.138814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.138848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.139055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.139096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.139267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.139300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 
00:36:17.591 [2024-07-26 16:41:37.139488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.139521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.139732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.139765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.139966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.140016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.140274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.140308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.140469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.140508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.140679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.140715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.140890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.140923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.141122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.141160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.141388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.141422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.141601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.141635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 
00:36:17.591 [2024-07-26 16:41:37.141835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.141872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.142053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.142096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.142320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.142362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.142564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.142601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.142794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.142837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.143062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.143097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.143251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.143285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.143471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.143506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.143711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.143744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.143975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.144012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 
00:36:17.591 [2024-07-26 16:41:37.144233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.144273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.144469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.144503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.144681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.144714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.144949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.144982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.145162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.145195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.145364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.145401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.145588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.145624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.145843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.145876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.146102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.146153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.146352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.146387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 
00:36:17.591 [2024-07-26 16:41:37.146563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.146605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.146839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.146890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.147092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.147129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.147318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.147367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.147579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.147617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.147823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.147857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.148008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.148070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.148282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.148320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.148528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.148565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.148743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.148777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 
00:36:17.591 [2024-07-26 16:41:37.148973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.149008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.149201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.149236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.149399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.149442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.149671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.149722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.149899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.149937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.150122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.150169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.150407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.150445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.150623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.150665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.150897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.150931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 00:36:17.591 [2024-07-26 16:41:37.151132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.591 [2024-07-26 16:41:37.151170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:17.591 qpair failed and we were unable to recover it. 
00:36:17.591 [2024-07-26 16:41:37.151407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.591 [2024-07-26 16:41:37.151442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:36:17.591 qpair failed and we were unable to recover it.
00:36:17.591 [2024-07-26 16:41:37.151620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.591 [2024-07-26 16:41:37.151654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:36:17.591 qpair failed and we were unable to recover it.
00:36:17.591-00:36:17.595 [2024-07-26 16:41:37.151 through 16:41:37.201] the same three-line failure repeats for every subsequent reconnect attempt in this window: posix.c:1023:posix_sock_create reports connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair handles 0x6150001ffe80, 0x615000210000, 0x6150001f2780, and 0x61500021ff00 (all with addr=10.0.0.2, port=4420); and each attempt ends with "qpair failed and we were unable to recover it."
00:36:17.595 [2024-07-26 16:41:37.202067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.202103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.202309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.202364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.202614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.202667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.202958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.203025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.203274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.203313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.203508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.203545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.203751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.203788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.203953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.203990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.204188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.204231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.204433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.204482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 
00:36:17.595 [2024-07-26 16:41:37.204673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.204712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.204921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.204956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.205143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.205178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.205359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.205393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.205565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.205599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.205775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.205813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.206018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.206053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.206236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.206283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.206479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.206516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.206723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.206776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 
00:36:17.595 [2024-07-26 16:41:37.206975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.207010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.207186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.207239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.207477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.207530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.207795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.207852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.208052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.208107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.208292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.208327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.208503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.208558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.208732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.208785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.208958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.208992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.209212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.209272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 
00:36:17.595 [2024-07-26 16:41:37.209507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.209560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.209730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.209769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.209943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.209977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.210142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.210176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.210355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.210394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.210593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.210627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.210880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.210939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.211123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.211157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.211335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.211369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.211573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.211610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 
00:36:17.595 [2024-07-26 16:41:37.211799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.211837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.212035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.212082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.212271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.212309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.212502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.212539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.212790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.212847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.213075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.213109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.213285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.213319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.213535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.213571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.213813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.213867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.214111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.214145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 
00:36:17.595 [2024-07-26 16:41:37.214320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.214371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.214569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.214602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.214780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.214817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.215010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.215047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.215249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.215282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.215454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.215491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.215756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.215795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.216032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.216091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.216283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.216320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.595 [2024-07-26 16:41:37.216589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.216658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 
00:36:17.595 [2024-07-26 16:41:37.217038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.595 [2024-07-26 16:41:37.217116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.595 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.217308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.217342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.217544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.217595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.217787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.217825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.217988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.218024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.218207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.218290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.218652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.218720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.218939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.218977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.219177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.219210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.219406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.219443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 
00:36:17.596 [2024-07-26 16:41:37.219612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.219650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.219848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.219885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.220090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.220126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.220312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.220365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.220591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.220647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.221026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.221104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.221301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.221351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.221583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.221635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.221866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.221928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.222119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.222171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 
00:36:17.596 [2024-07-26 16:41:37.222402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.222453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.222670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.222721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.222903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.222937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.223167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.223218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.223451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.223501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.223696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.223747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.223932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.223966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.224179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.224214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.224421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.224472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.224674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.224725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 
00:36:17.596 [2024-07-26 16:41:37.224943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.224975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.225173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.225224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.225415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.225465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.225670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.225721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.225925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.225959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.226182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.226233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.226492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.226542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.226735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.226785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.226996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.227030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.227216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.227270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 
00:36:17.596 [2024-07-26 16:41:37.227444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.227481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.227751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.227807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.227992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.228032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.228246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.228281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.228496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.228534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.228753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.228790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.228982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.229019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.229201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.229236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.229504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.229568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.229780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.229819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 
00:36:17.596 [2024-07-26 16:41:37.230012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.230050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.230259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.230293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.230542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.230578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.230890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.230947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.231220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.231258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.231467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.231518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.231891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.231958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.232203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.232237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.232435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.232472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.232741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.232781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 
00:36:17.596 [2024-07-26 16:41:37.233038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.233091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.233263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.233297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.233515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.233548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.233804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.233841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.234074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.234125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.234303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.234336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.234640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.234707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.234918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.234954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.235169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.235214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 00:36:17.596 [2024-07-26 16:41:37.235396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.596 [2024-07-26 16:41:37.235429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.596 qpair failed and we were unable to recover it. 
00:36:17.596 [2024-07-26 16:41:37.235653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.597 [2024-07-26 16:41:37.235690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:17.597 qpair failed and we were unable to recover it.
00:36:17.597 [... the same three-line sequence -- posix_sock_create connect() failed (errno = 111, ECONNREFUSED), nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." -- repeats for every reconnect attempt, timestamps 16:41:37.235980 through 16:41:37.285811 ...]
00:36:17.600 [2024-07-26 16:41:37.285979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.600 [2024-07-26 16:41:37.286013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:17.600 qpair failed and we were unable to recover it.
00:36:17.600 [2024-07-26 16:41:37.286166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.286199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.286419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.286462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.286664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.286696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.286909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.286943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.287140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.287174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.287349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.287381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.287581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.287618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.287843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.287880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.288067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.288101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.288298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.288335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 
00:36:17.600 [2024-07-26 16:41:37.288552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.288589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.288802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.288835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.289012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.289045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.289280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.289314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.289482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.289515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.289673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.289716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.289891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.289931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.290108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.290151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.290303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.290336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.290552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.290589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 
00:36:17.600 [2024-07-26 16:41:37.290803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.290836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.291021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.291071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.291246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.291279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.291431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.291465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.291642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.291678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.291888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.291924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.292119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.292152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.292365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.292401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.292619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.292654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.292845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.292877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 
00:36:17.600 [2024-07-26 16:41:37.293072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.293109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.293274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.293311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.293515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.293548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.293724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.293760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.293974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.294009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.294236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.294270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.294475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.294511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.294670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.294706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.294915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.294947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.295119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.295156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 
00:36:17.600 [2024-07-26 16:41:37.295360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.295396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.295562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.295598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.295822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.295858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.296053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.296092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.296247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.296280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.296489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.296525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.296751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.296787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.296992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.297025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.297256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.297289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.297505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.297537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 
00:36:17.600 [2024-07-26 16:41:37.297712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.600 [2024-07-26 16:41:37.297745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.600 qpair failed and we were unable to recover it. 00:36:17.600 [2024-07-26 16:41:37.297925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.297957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.298104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.298137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.298309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.298341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.298542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.298577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.298769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.298805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.298997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.299029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.299181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.299214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.299377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.299413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.299636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.299668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 
00:36:17.601 [2024-07-26 16:41:37.299835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.299871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.300093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.300130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.300324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.300356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.300556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.300592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.300762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.300798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.301036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.301075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.301273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.301309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.301525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.301570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.301767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.301799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.301953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.301985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 
00:36:17.601 [2024-07-26 16:41:37.302210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.302246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.302449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.302482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.302707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.302743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.302937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.302973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.303177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.303210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.303364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.303407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.303599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.303635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.303863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.303895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.304095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.304132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.304340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.304387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 
00:36:17.601 [2024-07-26 16:41:37.304589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.304623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.304806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.304869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.305027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.305075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.305254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.305286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.305507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.305557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.305749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.305785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.305959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.305991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.306155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.306188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.306365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.306397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.306601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.306634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 
00:36:17.601 [2024-07-26 16:41:37.306822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.306867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.307069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.307121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.307275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.307308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.307469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.307501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.307709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.307744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.307974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.308007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.308200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.308233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.308444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.308477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.308618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.308650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.308910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.308947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 
00:36:17.601 [2024-07-26 16:41:37.309119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.309152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.309289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.309321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.309514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.309555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.309722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.309758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.309977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.310009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.310203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.310235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.310433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.310469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.310671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.310703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.310934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.310969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.311149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.311181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 
00:36:17.601 [2024-07-26 16:41:37.311361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.311393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.311576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.311612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.311813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.311849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.312053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.312090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.312250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.312282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.312476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.312523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.312727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.312759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.312970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.313006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.313195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.313228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.313419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.313451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 
00:36:17.601 [2024-07-26 16:41:37.313654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.601 [2024-07-26 16:41:37.313690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.601 qpair failed and we were unable to recover it. 00:36:17.601 [2024-07-26 16:41:37.313916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.313952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.314162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.314196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.314404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.314440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.314634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.314669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.314855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.314887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.315090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.315127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.315341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.315373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.315574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.315606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.315792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.315824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 
00:36:17.602 [2024-07-26 16:41:37.315972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.316010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.316200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.316233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.316436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.316473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.316656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.316692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.316862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.316894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.317095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.317132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.317314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.317350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.317571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.317604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.317841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.317873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.318073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.318122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 
00:36:17.602 [2024-07-26 16:41:37.318332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.318370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.318585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.318622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.318828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.318860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.319031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.319068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.319230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.319264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.319460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.319507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.319693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.319726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.319873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.319915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.320152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.320185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.320364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.320396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 
00:36:17.602 [2024-07-26 16:41:37.320543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.320576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.320745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.320781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.320929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.320965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.321171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.321209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.321417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.321450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.321627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.321660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.321831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.321867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.322056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.322100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.322275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.322307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.322491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.322523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 
00:36:17.602 [2024-07-26 16:41:37.322742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.322775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.322950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.322986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.323162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.323199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.323409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.323445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.323629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.323662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.323847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.323879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.324089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.324126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.324325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.324359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.324570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.324620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.324842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.324877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 
00:36:17.602 [2024-07-26 16:41:37.325079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.325112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.325277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.325314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.325526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.325559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.325756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.325789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.325996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.326032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.326240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.326276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.326478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.326511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.326744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.326780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.326979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.327015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.327201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.327235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 
00:36:17.602 [2024-07-26 16:41:37.327418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.327451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.327632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.327664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.327863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.327895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.328086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.328120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.328320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.602 [2024-07-26 16:41:37.328363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.602 qpair failed and we were unable to recover it. 00:36:17.602 [2024-07-26 16:41:37.328543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.603 [2024-07-26 16:41:37.328575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.603 qpair failed and we were unable to recover it. 00:36:17.603 [2024-07-26 16:41:37.328756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.603 [2024-07-26 16:41:37.328789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.603 qpair failed and we were unable to recover it. 00:36:17.603 [2024-07-26 16:41:37.328979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.603 [2024-07-26 16:41:37.329026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.603 qpair failed and we were unable to recover it. 00:36:17.603 [2024-07-26 16:41:37.329305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.603 [2024-07-26 16:41:37.329353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.603 qpair failed and we were unable to recover it. 00:36:17.603 [2024-07-26 16:41:37.329557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.603 [2024-07-26 16:41:37.329593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.603 qpair failed and we were unable to recover it. 
00:36:17.603 [2024-07-26 16:41:37.329796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.603 [2024-07-26 16:41:37.329848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.603 qpair failed and we were unable to recover it. 00:36:17.603 [2024-07-26 16:41:37.330070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.603 [2024-07-26 16:41:37.330104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.603 qpair failed and we were unable to recover it. 00:36:17.603 [2024-07-26 16:41:37.330307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.603 [2024-07-26 16:41:37.330340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.603 qpair failed and we were unable to recover it. 00:36:17.603 [2024-07-26 16:41:37.330512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.603 [2024-07-26 16:41:37.330546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.603 qpair failed and we were unable to recover it. 00:36:17.603 [2024-07-26 16:41:37.330774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.603 [2024-07-26 16:41:37.330825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.603 qpair failed and we were unable to recover it. 00:36:17.603 [2024-07-26 16:41:37.331023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.603 [2024-07-26 16:41:37.331070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.603 qpair failed and we were unable to recover it. 00:36:17.603 [2024-07-26 16:41:37.331227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.603 [2024-07-26 16:41:37.331259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.603 qpair failed and we were unable to recover it. 00:36:17.603 [2024-07-26 16:41:37.331490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.603 [2024-07-26 16:41:37.331539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.603 qpair failed and we were unable to recover it. 00:36:17.603 [2024-07-26 16:41:37.331745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.603 [2024-07-26 16:41:37.331795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.603 qpair failed and we were unable to recover it. 00:36:17.603 [2024-07-26 16:41:37.332001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.603 [2024-07-26 16:41:37.332033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.603 qpair failed and we were unable to recover it. 
00:36:17.603 [2024-07-26 16:41:37.332196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.603 [2024-07-26 16:41:37.332230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.603 qpair failed and we were unable to recover it. 00:36:17.603 [2024-07-26 16:41:37.332433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.603 [2024-07-26 16:41:37.332499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.603 qpair failed and we were unable to recover it. 00:36:17.603 [2024-07-26 16:41:37.332726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.603 [2024-07-26 16:41:37.332777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.603 qpair failed and we were unable to recover it. 00:36:17.603 [2024-07-26 16:41:37.332955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.603 [2024-07-26 16:41:37.332988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.603 qpair failed and we were unable to recover it. 00:36:17.603 [2024-07-26 16:41:37.333183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.603 [2024-07-26 16:41:37.333217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.603 qpair failed and we were unable to recover it. 00:36:17.603 [2024-07-26 16:41:37.333390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.603 [2024-07-26 16:41:37.333442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.603 qpair failed and we were unable to recover it. 00:36:17.603 [2024-07-26 16:41:37.333681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.603 [2024-07-26 16:41:37.333731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.603 qpair failed and we were unable to recover it. 00:36:17.603 [2024-07-26 16:41:37.333930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.603 [2024-07-26 16:41:37.333965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.603 qpair failed and we were unable to recover it. 00:36:17.603 [2024-07-26 16:41:37.334159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.603 [2024-07-26 16:41:37.334211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.603 qpair failed and we were unable to recover it. 00:36:17.603 [2024-07-26 16:41:37.334378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.603 [2024-07-26 16:41:37.334431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.603 qpair failed and we were unable to recover it. 
00:36:17.603 [2024-07-26 16:41:37.334698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.603 [2024-07-26 16:41:37.334751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.603 qpair failed and we were unable to recover it. 00:36:17.603 [2024-07-26 16:41:37.334941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.603 [2024-07-26 16:41:37.334975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.603 qpair failed and we were unable to recover it. 00:36:17.603 [2024-07-26 16:41:37.335187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.603 [2024-07-26 16:41:37.335239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.603 qpair failed and we were unable to recover it. 00:36:17.603 [2024-07-26 16:41:37.335472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.603 [2024-07-26 16:41:37.335524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.603 qpair failed and we were unable to recover it. 00:36:17.603 [2024-07-26 16:41:37.335732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.603 [2024-07-26 16:41:37.335783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.603 qpair failed and we were unable to recover it. 00:36:17.603 [2024-07-26 16:41:37.335942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.603 [2024-07-26 16:41:37.335974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.603 qpair failed and we were unable to recover it. 00:36:17.603 [2024-07-26 16:41:37.336157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.603 [2024-07-26 16:41:37.336210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.603 qpair failed and we were unable to recover it. 00:36:17.603 [2024-07-26 16:41:37.336385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.603 [2024-07-26 16:41:37.336437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.603 qpair failed and we were unable to recover it. 00:36:17.603 [2024-07-26 16:41:37.336609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.603 [2024-07-26 16:41:37.336661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.603 qpair failed and we were unable to recover it. 00:36:17.873 [2024-07-26 16:41:37.336824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.336858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 
00:36:17.873 [2024-07-26 16:41:37.337029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.337080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 00:36:17.873 [2024-07-26 16:41:37.337254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.337306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 00:36:17.873 [2024-07-26 16:41:37.337491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.337553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 00:36:17.873 [2024-07-26 16:41:37.337779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.337833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 00:36:17.873 [2024-07-26 16:41:37.337979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.338012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 00:36:17.873 [2024-07-26 16:41:37.338203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.338254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 00:36:17.873 [2024-07-26 16:41:37.338446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.338496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 00:36:17.873 [2024-07-26 16:41:37.338703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.338755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 00:36:17.873 [2024-07-26 16:41:37.338924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.338968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 00:36:17.873 [2024-07-26 16:41:37.339173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.339224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 
00:36:17.873 [2024-07-26 16:41:37.339448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.339500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 00:36:17.873 [2024-07-26 16:41:37.339693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.339750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 00:36:17.873 [2024-07-26 16:41:37.339901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.339934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 00:36:17.873 [2024-07-26 16:41:37.340088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.340123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 00:36:17.873 [2024-07-26 16:41:37.340294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.340345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 00:36:17.873 [2024-07-26 16:41:37.340530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.340581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 00:36:17.873 [2024-07-26 16:41:37.340786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.340819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 00:36:17.873 [2024-07-26 16:41:37.340994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.341026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 00:36:17.873 [2024-07-26 16:41:37.341235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.341288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 00:36:17.873 [2024-07-26 16:41:37.341464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.341520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 
00:36:17.873 [2024-07-26 16:41:37.341722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.341772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 00:36:17.873 [2024-07-26 16:41:37.341950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.341987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 00:36:17.873 [2024-07-26 16:41:37.342213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.342265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 00:36:17.873 [2024-07-26 16:41:37.342466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.342517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 00:36:17.873 [2024-07-26 16:41:37.342755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.342805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 00:36:17.873 [2024-07-26 16:41:37.343014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.343046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 00:36:17.873 [2024-07-26 16:41:37.343264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.343315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 00:36:17.873 [2024-07-26 16:41:37.343547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.343582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 00:36:17.873 [2024-07-26 16:41:37.343798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.343832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 00:36:17.873 [2024-07-26 16:41:37.344007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.344040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 
00:36:17.873 [2024-07-26 16:41:37.344266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.344317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 00:36:17.873 [2024-07-26 16:41:37.344520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.344572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 00:36:17.873 [2024-07-26 16:41:37.344774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.344825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 00:36:17.873 [2024-07-26 16:41:37.345015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.345048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 00:36:17.873 [2024-07-26 16:41:37.345272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.345322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 00:36:17.873 [2024-07-26 16:41:37.345539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.345591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 00:36:17.873 [2024-07-26 16:41:37.345789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.345839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 00:36:17.873 [2024-07-26 16:41:37.346016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.346054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 00:36:17.873 [2024-07-26 16:41:37.346306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.346357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 00:36:17.873 [2024-07-26 16:41:37.346569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.873 [2024-07-26 16:41:37.346619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.873 qpair failed and we were unable to recover it. 
00:36:17.873 [2024-07-26 16:41:37.346791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.874 [2024-07-26 16:41:37.346842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.874 qpair failed and we were unable to recover it. 00:36:17.874 [2024-07-26 16:41:37.346999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.874 [2024-07-26 16:41:37.347033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.874 qpair failed and we were unable to recover it. 00:36:17.874 [2024-07-26 16:41:37.347263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.874 [2024-07-26 16:41:37.347312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.874 qpair failed and we were unable to recover it. 00:36:17.874 [2024-07-26 16:41:37.347522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.874 [2024-07-26 16:41:37.347572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.874 qpair failed and we were unable to recover it. 00:36:17.874 [2024-07-26 16:41:37.347775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.874 [2024-07-26 16:41:37.347826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.874 qpair failed and we were unable to recover it. 00:36:17.874 [2024-07-26 16:41:37.348057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.874 [2024-07-26 16:41:37.348097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.874 qpair failed and we were unable to recover it. 00:36:17.874 [2024-07-26 16:41:37.348266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.874 [2024-07-26 16:41:37.348317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.874 qpair failed and we were unable to recover it. 00:36:17.874 [2024-07-26 16:41:37.348526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.874 [2024-07-26 16:41:37.348577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.874 qpair failed and we were unable to recover it. 00:36:17.874 [2024-07-26 16:41:37.348759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.874 [2024-07-26 16:41:37.348809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.874 qpair failed and we were unable to recover it. 00:36:17.874 [2024-07-26 16:41:37.348988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.874 [2024-07-26 16:41:37.349021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.874 qpair failed and we were unable to recover it. 
00:36:17.874 [2024-07-26 16:41:37.349206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.874 [2024-07-26 16:41:37.349257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.874 qpair failed and we were unable to recover it. 00:36:17.874 [2024-07-26 16:41:37.349444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.874 [2024-07-26 16:41:37.349495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.874 qpair failed and we were unable to recover it. 00:36:17.874 [2024-07-26 16:41:37.349703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.874 [2024-07-26 16:41:37.349753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.874 qpair failed and we were unable to recover it. 00:36:17.874 [2024-07-26 16:41:37.349929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.874 [2024-07-26 16:41:37.349962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.874 qpair failed and we were unable to recover it. 00:36:17.874 [2024-07-26 16:41:37.350171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.874 [2024-07-26 16:41:37.350224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.874 qpair failed and we were unable to recover it. 00:36:17.874 [2024-07-26 16:41:37.350421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.874 [2024-07-26 16:41:37.350482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.874 qpair failed and we were unable to recover it. 00:36:17.874 [2024-07-26 16:41:37.350688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.874 [2024-07-26 16:41:37.350737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.874 qpair failed and we were unable to recover it. 00:36:17.874 [2024-07-26 16:41:37.350920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.874 [2024-07-26 16:41:37.350954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.874 qpair failed and we were unable to recover it. 00:36:17.874 [2024-07-26 16:41:37.351158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.874 [2024-07-26 16:41:37.351209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.874 qpair failed and we were unable to recover it. 00:36:17.874 [2024-07-26 16:41:37.351442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.874 [2024-07-26 16:41:37.351507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.874 qpair failed and we were unable to recover it. 
00:36:17.874 [2024-07-26 16:41:37.351748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.874 [2024-07-26 16:41:37.351799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.874 qpair failed and we were unable to recover it. 00:36:17.874 [2024-07-26 16:41:37.351966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.874 [2024-07-26 16:41:37.352003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.874 qpair failed and we were unable to recover it. 00:36:17.874 [2024-07-26 16:41:37.352213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.874 [2024-07-26 16:41:37.352264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.874 qpair failed and we were unable to recover it. 00:36:17.874 [2024-07-26 16:41:37.352433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.874 [2024-07-26 16:41:37.352467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.874 qpair failed and we were unable to recover it. 00:36:17.874 [2024-07-26 16:41:37.352672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.874 [2024-07-26 16:41:37.352732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.874 qpair failed and we were unable to recover it. 00:36:17.874 [2024-07-26 16:41:37.352901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.874 [2024-07-26 16:41:37.352935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.874 qpair failed and we were unable to recover it. 00:36:17.874 [2024-07-26 16:41:37.353170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.874 [2024-07-26 16:41:37.353221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.874 qpair failed and we were unable to recover it. 00:36:17.874 [2024-07-26 16:41:37.353430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.874 [2024-07-26 16:41:37.353480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.874 qpair failed and we were unable to recover it. 00:36:17.874 [2024-07-26 16:41:37.353720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.874 [2024-07-26 16:41:37.353778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.874 qpair failed and we were unable to recover it. 00:36:17.874 [2024-07-26 16:41:37.353989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.874 [2024-07-26 16:41:37.354022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.874 qpair failed and we were unable to recover it. 
00:36:17.874 [2024-07-26 16:41:37.354265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.874 [2024-07-26 16:41:37.354316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.874 qpair failed and we were unable to recover it. 00:36:17.874 [2024-07-26 16:41:37.354536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.874 [2024-07-26 16:41:37.354587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.874 qpair failed and we were unable to recover it. 00:36:17.874 [2024-07-26 16:41:37.354763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.874 [2024-07-26 16:41:37.354806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.874 qpair failed and we were unable to recover it. 00:36:17.874 [2024-07-26 16:41:37.355011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.874 [2024-07-26 16:41:37.355044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.874 qpair failed and we were unable to recover it. 00:36:17.874 [2024-07-26 16:41:37.355325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.874 [2024-07-26 16:41:37.355376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.874 qpair failed and we were unable to recover it. 00:36:17.874 [2024-07-26 16:41:37.355567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.874 [2024-07-26 16:41:37.355618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.874 qpair failed and we were unable to recover it. 00:36:17.874 [2024-07-26 16:41:37.355831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.874 [2024-07-26 16:41:37.355864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.874 qpair failed and we were unable to recover it. 00:36:17.874 [2024-07-26 16:41:37.356148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.875 [2024-07-26 16:41:37.356200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.875 qpair failed and we were unable to recover it. 00:36:17.875 [2024-07-26 16:41:37.356448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.875 [2024-07-26 16:41:37.356499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.875 qpair failed and we were unable to recover it. 00:36:17.875 [2024-07-26 16:41:37.356688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.875 [2024-07-26 16:41:37.356739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.875 qpair failed and we were unable to recover it. 
00:36:17.875 [2024-07-26 16:41:37.356952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.875 [2024-07-26 16:41:37.356985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.875 qpair failed and we were unable to recover it. 00:36:17.875 [2024-07-26 16:41:37.357193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.875 [2024-07-26 16:41:37.357245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.875 qpair failed and we were unable to recover it. 00:36:17.875 [2024-07-26 16:41:37.357552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.875 [2024-07-26 16:41:37.357603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.875 qpair failed and we were unable to recover it. 00:36:17.875 [2024-07-26 16:41:37.357820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.875 [2024-07-26 16:41:37.357871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.875 qpair failed and we were unable to recover it. 00:36:17.875 [2024-07-26 16:41:37.358123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.875 [2024-07-26 16:41:37.358161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.875 qpair failed and we were unable to recover it. 00:36:17.875 [2024-07-26 16:41:37.358426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.875 [2024-07-26 16:41:37.358469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.875 qpair failed and we were unable to recover it. 00:36:17.875 [2024-07-26 16:41:37.358780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.875 [2024-07-26 16:41:37.358833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.875 qpair failed and we were unable to recover it. 00:36:17.875 [2024-07-26 16:41:37.359074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.875 [2024-07-26 16:41:37.359108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.875 qpair failed and we were unable to recover it. 00:36:17.875 [2024-07-26 16:41:37.359340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.875 [2024-07-26 16:41:37.359400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.875 qpair failed and we were unable to recover it. 00:36:17.875 [2024-07-26 16:41:37.359599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.875 [2024-07-26 16:41:37.359649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.875 qpair failed and we were unable to recover it. 
00:36:17.875 [2024-07-26 16:41:37.359874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.875 [2024-07-26 16:41:37.359923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.875 qpair failed and we were unable to recover it. 00:36:17.875 [2024-07-26 16:41:37.360152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.875 [2024-07-26 16:41:37.360205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.875 qpair failed and we were unable to recover it. 00:36:17.875 [2024-07-26 16:41:37.360404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.875 [2024-07-26 16:41:37.360455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.875 qpair failed and we were unable to recover it. 00:36:17.875 [2024-07-26 16:41:37.360662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.875 [2024-07-26 16:41:37.360712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.875 qpair failed and we were unable to recover it. 00:36:17.875 [2024-07-26 16:41:37.360875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.875 [2024-07-26 16:41:37.360908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.875 qpair failed and we were unable to recover it. 00:36:17.875 [2024-07-26 16:41:37.361135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.875 [2024-07-26 16:41:37.361187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.875 qpair failed and we were unable to recover it. 00:36:17.875 [2024-07-26 16:41:37.361382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.875 [2024-07-26 16:41:37.361416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.875 qpair failed and we were unable to recover it. 00:36:17.875 [2024-07-26 16:41:37.361649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.875 [2024-07-26 16:41:37.361698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.875 qpair failed and we were unable to recover it. 00:36:17.875 [2024-07-26 16:41:37.361881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.875 [2024-07-26 16:41:37.361914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.875 qpair failed and we were unable to recover it. 00:36:17.875 [2024-07-26 16:41:37.362099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.875 [2024-07-26 16:41:37.362133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.875 qpair failed and we were unable to recover it. 
00:36:17.875 [2024-07-26 16:41:37.362298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.875 [2024-07-26 16:41:37.362349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.875 qpair failed and we were unable to recover it. 00:36:17.875 [2024-07-26 16:41:37.362558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.875 [2024-07-26 16:41:37.362612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.875 qpair failed and we were unable to recover it. 00:36:17.875 [2024-07-26 16:41:37.362819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.875 [2024-07-26 16:41:37.362852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.875 qpair failed and we were unable to recover it. 00:36:17.875 [2024-07-26 16:41:37.363038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.875 [2024-07-26 16:41:37.363103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.875 qpair failed and we were unable to recover it. 00:36:17.875 [2024-07-26 16:41:37.363358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.875 [2024-07-26 16:41:37.363408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.875 qpair failed and we were unable to recover it. 00:36:17.875 [2024-07-26 16:41:37.363596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.875 [2024-07-26 16:41:37.363645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.875 qpair failed and we were unable to recover it. 00:36:17.875 [2024-07-26 16:41:37.363852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.875 [2024-07-26 16:41:37.363899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.875 qpair failed and we were unable to recover it. 00:36:17.875 [2024-07-26 16:41:37.364112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.875 [2024-07-26 16:41:37.364150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.875 qpair failed and we were unable to recover it. 00:36:17.875 [2024-07-26 16:41:37.364373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.875 [2024-07-26 16:41:37.364424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.875 qpair failed and we were unable to recover it. 00:36:17.875 [2024-07-26 16:41:37.364623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.875 [2024-07-26 16:41:37.364674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.875 qpair failed and we were unable to recover it. 
00:36:17.875 [2024-07-26 16:41:37.364838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.875 [2024-07-26 16:41:37.364870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.875 qpair failed and we were unable to recover it. 00:36:17.875 [2024-07-26 16:41:37.365071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.875 [2024-07-26 16:41:37.365122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.875 qpair failed and we were unable to recover it. 00:36:17.875 [2024-07-26 16:41:37.365326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.875 [2024-07-26 16:41:37.365384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.875 qpair failed and we were unable to recover it. 00:36:17.875 [2024-07-26 16:41:37.365653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.875 [2024-07-26 16:41:37.365709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.875 qpair failed and we were unable to recover it. 00:36:17.875 [2024-07-26 16:41:37.365936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.875 [2024-07-26 16:41:37.365970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.876 qpair failed and we were unable to recover it. 00:36:17.876 [2024-07-26 16:41:37.366180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.876 [2024-07-26 16:41:37.366232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.876 qpair failed and we were unable to recover it. 00:36:17.876 [2024-07-26 16:41:37.366477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.876 [2024-07-26 16:41:37.366529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.876 qpair failed and we were unable to recover it. 00:36:17.876 [2024-07-26 16:41:37.366731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.876 [2024-07-26 16:41:37.366781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.876 qpair failed and we were unable to recover it. 00:36:17.876 [2024-07-26 16:41:37.366980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.876 [2024-07-26 16:41:37.367013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.876 qpair failed and we were unable to recover it. 00:36:17.876 [2024-07-26 16:41:37.367226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.876 [2024-07-26 16:41:37.367278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.876 qpair failed and we were unable to recover it. 
00:36:17.876 [2024-07-26 16:41:37.367499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.876 [2024-07-26 16:41:37.367550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.876 qpair failed and we were unable to recover it. 00:36:17.876 [2024-07-26 16:41:37.367751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.876 [2024-07-26 16:41:37.367803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.876 qpair failed and we were unable to recover it. 00:36:17.876 [2024-07-26 16:41:37.367957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.876 [2024-07-26 16:41:37.367990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.876 qpair failed and we were unable to recover it. 00:36:17.876 [2024-07-26 16:41:37.368208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.876 [2024-07-26 16:41:37.368260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.876 qpair failed and we were unable to recover it. 00:36:17.876 [2024-07-26 16:41:37.368496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.876 [2024-07-26 16:41:37.368546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.876 qpair failed and we were unable to recover it. 00:36:17.876 [2024-07-26 16:41:37.368784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.876 [2024-07-26 16:41:37.368835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.876 qpair failed and we were unable to recover it. 00:36:17.876 [2024-07-26 16:41:37.369072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.876 [2024-07-26 16:41:37.369121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.876 qpair failed and we were unable to recover it. 00:36:17.876 [2024-07-26 16:41:37.369326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.876 [2024-07-26 16:41:37.369377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.876 qpair failed and we were unable to recover it. 00:36:17.876 [2024-07-26 16:41:37.369638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.876 [2024-07-26 16:41:37.369688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.876 qpair failed and we were unable to recover it. 00:36:17.876 [2024-07-26 16:41:37.369910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.876 [2024-07-26 16:41:37.369962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.876 qpair failed and we were unable to recover it. 
00:36:17.876 [2024-07-26 16:41:37.370147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.876 [2024-07-26 16:41:37.370181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.876 qpair failed and we were unable to recover it. 00:36:17.876 [2024-07-26 16:41:37.370381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.876 [2024-07-26 16:41:37.370435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.876 qpair failed and we were unable to recover it. 00:36:17.876 [2024-07-26 16:41:37.370685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.876 [2024-07-26 16:41:37.370735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.876 qpair failed and we were unable to recover it. 00:36:17.876 [2024-07-26 16:41:37.370917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.876 [2024-07-26 16:41:37.370949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.876 qpair failed and we were unable to recover it. 00:36:17.876 [2024-07-26 16:41:37.371103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.876 [2024-07-26 16:41:37.371137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.876 qpair failed and we were unable to recover it. 00:36:17.876 [2024-07-26 16:41:37.371350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.876 [2024-07-26 16:41:37.371406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.876 qpair failed and we were unable to recover it. 00:36:17.876 [2024-07-26 16:41:37.371618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.876 [2024-07-26 16:41:37.371676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.876 qpair failed and we were unable to recover it. 00:36:17.876 [2024-07-26 16:41:37.371890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.876 [2024-07-26 16:41:37.371922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.876 qpair failed and we were unable to recover it. 00:36:17.876 [2024-07-26 16:41:37.372256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.876 [2024-07-26 16:41:37.372306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.876 qpair failed and we were unable to recover it. 00:36:17.876 [2024-07-26 16:41:37.372512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.876 [2024-07-26 16:41:37.372562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.876 qpair failed and we were unable to recover it. 
00:36:17.876 [2024-07-26 16:41:37.372768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.876 [2024-07-26 16:41:37.372816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.876 qpair failed and we were unable to recover it. 00:36:17.876 [2024-07-26 16:41:37.372987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.876 [2024-07-26 16:41:37.373021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.876 qpair failed and we were unable to recover it. 00:36:17.876 [2024-07-26 16:41:37.373272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.876 [2024-07-26 16:41:37.373323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.876 qpair failed and we were unable to recover it. 00:36:17.876 [2024-07-26 16:41:37.373510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.876 [2024-07-26 16:41:37.373560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.876 qpair failed and we were unable to recover it. 00:36:17.876 [2024-07-26 16:41:37.373778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.876 [2024-07-26 16:41:37.373829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.876 qpair failed and we were unable to recover it. 00:36:17.876 [2024-07-26 16:41:37.374031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.876 [2024-07-26 16:41:37.374079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.876 qpair failed and we were unable to recover it. 00:36:17.876 [2024-07-26 16:41:37.374247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.876 [2024-07-26 16:41:37.374298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.876 qpair failed and we were unable to recover it. 00:36:17.876 [2024-07-26 16:41:37.374504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.876 [2024-07-26 16:41:37.374562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.876 qpair failed and we were unable to recover it. 00:36:17.876 [2024-07-26 16:41:37.374762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.876 [2024-07-26 16:41:37.374818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.876 qpair failed and we were unable to recover it. 00:36:17.876 [2024-07-26 16:41:37.375011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.876 [2024-07-26 16:41:37.375044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.876 qpair failed and we were unable to recover it. 
00:36:17.876 [2024-07-26 16:41:37.375286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.876 [2024-07-26 16:41:37.375336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.876 qpair failed and we were unable to recover it. 00:36:17.876 [2024-07-26 16:41:37.375546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.876 [2024-07-26 16:41:37.375595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.876 qpair failed and we were unable to recover it. 00:36:17.877 [2024-07-26 16:41:37.375831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.877 [2024-07-26 16:41:37.375882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.877 qpair failed and we were unable to recover it. 00:36:17.877 [2024-07-26 16:41:37.376071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.877 [2024-07-26 16:41:37.376106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.877 qpair failed and we were unable to recover it. 00:36:17.877 [2024-07-26 16:41:37.376305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.877 [2024-07-26 16:41:37.376355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.877 qpair failed and we were unable to recover it. 00:36:17.877 [2024-07-26 16:41:37.376597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.877 [2024-07-26 16:41:37.376657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.877 qpair failed and we were unable to recover it. 00:36:17.877 [2024-07-26 16:41:37.376867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.877 [2024-07-26 16:41:37.376917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.877 qpair failed and we were unable to recover it. 00:36:17.877 [2024-07-26 16:41:37.377145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.877 [2024-07-26 16:41:37.377197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.877 qpair failed and we were unable to recover it. 00:36:17.877 [2024-07-26 16:41:37.377406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.877 [2024-07-26 16:41:37.377457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.877 qpair failed and we were unable to recover it. 00:36:17.877 [2024-07-26 16:41:37.377647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.877 [2024-07-26 16:41:37.377698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.877 qpair failed and we were unable to recover it. 
00:36:17.877 [2024-07-26 16:41:37.377845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.877 [2024-07-26 16:41:37.377878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.877 qpair failed and we were unable to recover it. 00:36:17.877 [2024-07-26 16:41:37.378074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.877 [2024-07-26 16:41:37.378107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.877 qpair failed and we were unable to recover it. 00:36:17.877 [2024-07-26 16:41:37.378306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.877 [2024-07-26 16:41:37.378357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.877 qpair failed and we were unable to recover it. 00:36:17.877 [2024-07-26 16:41:37.378584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.877 [2024-07-26 16:41:37.378635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.877 qpair failed and we were unable to recover it. 00:36:17.877 [2024-07-26 16:41:37.378839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.877 [2024-07-26 16:41:37.378889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.877 qpair failed and we were unable to recover it. 00:36:17.877 [2024-07-26 16:41:37.379114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.877 [2024-07-26 16:41:37.379166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.877 qpair failed and we were unable to recover it. 00:36:17.877 [2024-07-26 16:41:37.379365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.877 [2024-07-26 16:41:37.379421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.877 qpair failed and we were unable to recover it. 00:36:17.877 [2024-07-26 16:41:37.379636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.877 [2024-07-26 16:41:37.379687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.877 qpair failed and we were unable to recover it. 00:36:17.877 [2024-07-26 16:41:37.379841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.877 [2024-07-26 16:41:37.379878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.877 qpair failed and we were unable to recover it. 00:36:17.877 [2024-07-26 16:41:37.380027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.877 [2024-07-26 16:41:37.380072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.877 qpair failed and we were unable to recover it. 
00:36:17.877 [2024-07-26 16:41:37.380241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.877 [2024-07-26 16:41:37.380294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.877 qpair failed and we were unable to recover it. 00:36:17.877 [2024-07-26 16:41:37.380500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.877 [2024-07-26 16:41:37.380550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.877 qpair failed and we were unable to recover it. 00:36:17.877 [2024-07-26 16:41:37.380745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.877 [2024-07-26 16:41:37.380795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.877 qpair failed and we were unable to recover it. 00:36:17.877 [2024-07-26 16:41:37.380984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.877 [2024-07-26 16:41:37.381018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.877 qpair failed and we were unable to recover it. 00:36:17.877 [2024-07-26 16:41:37.381261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.877 [2024-07-26 16:41:37.381311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.877 qpair failed and we were unable to recover it. 00:36:17.877 [2024-07-26 16:41:37.381520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.877 [2024-07-26 16:41:37.381579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.877 qpair failed and we were unable to recover it. 00:36:17.877 [2024-07-26 16:41:37.381784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.877 [2024-07-26 16:41:37.381833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.877 qpair failed and we were unable to recover it. 00:36:17.877 [2024-07-26 16:41:37.382064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.877 [2024-07-26 16:41:37.382099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.877 qpair failed and we were unable to recover it. 00:36:17.877 [2024-07-26 16:41:37.382305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.877 [2024-07-26 16:41:37.382360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.877 qpair failed and we were unable to recover it. 00:36:17.877 [2024-07-26 16:41:37.382581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.877 [2024-07-26 16:41:37.382631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.877 qpair failed and we were unable to recover it. 
00:36:17.877 [2024-07-26 16:41:37.382838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.877 [2024-07-26 16:41:37.382888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.877 qpair failed and we were unable to recover it. 00:36:17.877 [2024-07-26 16:41:37.383089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.877 [2024-07-26 16:41:37.383123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.877 qpair failed and we were unable to recover it. 00:36:17.877 [2024-07-26 16:41:37.383368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.877 [2024-07-26 16:41:37.383425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.877 qpair failed and we were unable to recover it. 00:36:17.877 [2024-07-26 16:41:37.383660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.877 [2024-07-26 16:41:37.383710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.878 qpair failed and we were unable to recover it. 00:36:17.878 [2024-07-26 16:41:37.383904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.878 [2024-07-26 16:41:37.383953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.878 qpair failed and we were unable to recover it. 00:36:17.878 [2024-07-26 16:41:37.384133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.878 [2024-07-26 16:41:37.384167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.878 qpair failed and we were unable to recover it. 00:36:17.878 [2024-07-26 16:41:37.384407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.878 [2024-07-26 16:41:37.384463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.878 qpair failed and we were unable to recover it. 00:36:17.878 [2024-07-26 16:41:37.384668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.878 [2024-07-26 16:41:37.384718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.878 qpair failed and we were unable to recover it. 00:36:17.878 [2024-07-26 16:41:37.384926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.878 [2024-07-26 16:41:37.384977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.878 qpair failed and we were unable to recover it. 00:36:17.878 [2024-07-26 16:41:37.385161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.878 [2024-07-26 16:41:37.385194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.878 qpair failed and we were unable to recover it. 
00:36:17.878 [2024-07-26 16:41:37.385356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.878 [2024-07-26 16:41:37.385406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.878 qpair failed and we were unable to recover it. 00:36:17.878 [2024-07-26 16:41:37.385619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.878 [2024-07-26 16:41:37.385669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.878 qpair failed and we were unable to recover it. 00:36:17.878 [2024-07-26 16:41:37.385881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.878 [2024-07-26 16:41:37.385914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.878 qpair failed and we were unable to recover it. 00:36:17.878 [2024-07-26 16:41:37.386128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.878 [2024-07-26 16:41:37.386179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.878 qpair failed and we were unable to recover it. 00:36:17.878 [2024-07-26 16:41:37.386378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.878 [2024-07-26 16:41:37.386429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.878 qpair failed and we were unable to recover it. 00:36:17.878 [2024-07-26 16:41:37.386685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.878 [2024-07-26 16:41:37.386736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.878 qpair failed and we were unable to recover it. 00:36:17.878 [2024-07-26 16:41:37.386913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.878 [2024-07-26 16:41:37.386945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.878 qpair failed and we were unable to recover it. 00:36:17.878 [2024-07-26 16:41:37.387105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.878 [2024-07-26 16:41:37.387143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.878 qpair failed and we were unable to recover it. 00:36:17.878 [2024-07-26 16:41:37.387383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.878 [2024-07-26 16:41:37.387435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.878 qpair failed and we were unable to recover it. 00:36:17.878 [2024-07-26 16:41:37.387673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.878 [2024-07-26 16:41:37.387725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.878 qpair failed and we were unable to recover it. 
00:36:17.878 [2024-07-26 16:41:37.387899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.878 [2024-07-26 16:41:37.387942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.878 qpair failed and we were unable to recover it. 00:36:17.878 [2024-07-26 16:41:37.388176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.878 [2024-07-26 16:41:37.388228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.878 qpair failed and we were unable to recover it. 00:36:17.878 [2024-07-26 16:41:37.388431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.878 [2024-07-26 16:41:37.388480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.878 qpair failed and we were unable to recover it. 00:36:17.878 [2024-07-26 16:41:37.388717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.878 [2024-07-26 16:41:37.388767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.878 qpair failed and we were unable to recover it. 00:36:17.878 [2024-07-26 16:41:37.388957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.878 [2024-07-26 16:41:37.388990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.878 qpair failed and we were unable to recover it. 00:36:17.878 [2024-07-26 16:41:37.389235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.878 [2024-07-26 16:41:37.389287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.878 qpair failed and we were unable to recover it. 00:36:17.878 [2024-07-26 16:41:37.389469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.878 [2024-07-26 16:41:37.389520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.878 qpair failed and we were unable to recover it. 00:36:17.878 [2024-07-26 16:41:37.389764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.878 [2024-07-26 16:41:37.389815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.878 qpair failed and we were unable to recover it. 00:36:17.878 [2024-07-26 16:41:37.390010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.878 [2024-07-26 16:41:37.390047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.878 qpair failed and we were unable to recover it. 00:36:17.878 [2024-07-26 16:41:37.390280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.878 [2024-07-26 16:41:37.390331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.878 qpair failed and we were unable to recover it. 
00:36:17.878 [2024-07-26 16:41:37.390544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.878 [2024-07-26 16:41:37.390594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.878 qpair failed and we were unable to recover it. 00:36:17.878 [2024-07-26 16:41:37.390832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.878 [2024-07-26 16:41:37.390881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.878 qpair failed and we were unable to recover it. 00:36:17.878 [2024-07-26 16:41:37.391071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.878 [2024-07-26 16:41:37.391105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.878 qpair failed and we were unable to recover it. 00:36:17.878 [2024-07-26 16:41:37.391338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.878 [2024-07-26 16:41:37.391398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.878 qpair failed and we were unable to recover it. 00:36:17.878 [2024-07-26 16:41:37.391620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.878 [2024-07-26 16:41:37.391670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.878 qpair failed and we were unable to recover it. 00:36:17.878 [2024-07-26 16:41:37.391903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.878 [2024-07-26 16:41:37.391952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.878 qpair failed and we were unable to recover it. 00:36:17.878 [2024-07-26 16:41:37.392158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.878 [2024-07-26 16:41:37.392193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.878 qpair failed and we were unable to recover it. 00:36:17.878 [2024-07-26 16:41:37.392391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.878 [2024-07-26 16:41:37.392452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.878 qpair failed and we were unable to recover it. 00:36:17.878 [2024-07-26 16:41:37.392649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.878 [2024-07-26 16:41:37.392699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.878 qpair failed and we were unable to recover it. 00:36:17.878 [2024-07-26 16:41:37.392854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.878 [2024-07-26 16:41:37.392887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.878 qpair failed and we were unable to recover it. 
00:36:17.878 [2024-07-26 16:41:37.393112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.878 [2024-07-26 16:41:37.393165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.878 qpair failed and we were unable to recover it. 00:36:17.879 [2024-07-26 16:41:37.393310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.879 [2024-07-26 16:41:37.393344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.879 qpair failed and we were unable to recover it. 00:36:17.879 [2024-07-26 16:41:37.393560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.879 [2024-07-26 16:41:37.393610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.879 qpair failed and we were unable to recover it. 00:36:17.879 [2024-07-26 16:41:37.393820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.879 [2024-07-26 16:41:37.393854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.879 qpair failed and we were unable to recover it. 00:36:17.879 [2024-07-26 16:41:37.394030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.879 [2024-07-26 16:41:37.394086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.879 qpair failed and we were unable to recover it. 00:36:17.879 [2024-07-26 16:41:37.394247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.879 [2024-07-26 16:41:37.394281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.879 qpair failed and we were unable to recover it. 00:36:17.879 [2024-07-26 16:41:37.394477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.879 [2024-07-26 16:41:37.394527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.879 qpair failed and we were unable to recover it. 00:36:17.879 [2024-07-26 16:41:37.394753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.879 [2024-07-26 16:41:37.394807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.879 qpair failed and we were unable to recover it. 00:36:17.879 [2024-07-26 16:41:37.395018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.879 [2024-07-26 16:41:37.395056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.879 qpair failed and we were unable to recover it. 00:36:17.879 [2024-07-26 16:41:37.395294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.879 [2024-07-26 16:41:37.395347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.879 qpair failed and we were unable to recover it. 
00:36:17.879 [2024-07-26 16:41:37.395532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.879 [2024-07-26 16:41:37.395587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.879 qpair failed and we were unable to recover it. 00:36:17.879 [2024-07-26 16:41:37.395809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.879 [2024-07-26 16:41:37.395843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.879 qpair failed and we were unable to recover it. 00:36:17.879 [2024-07-26 16:41:37.396035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.879 [2024-07-26 16:41:37.396088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.879 qpair failed and we were unable to recover it. 00:36:17.879 [2024-07-26 16:41:37.396311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.879 [2024-07-26 16:41:37.396358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.879 qpair failed and we were unable to recover it. 00:36:17.879 [2024-07-26 16:41:37.396536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.879 [2024-07-26 16:41:37.396589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.879 qpair failed and we were unable to recover it. 00:36:17.879 [2024-07-26 16:41:37.396760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.879 [2024-07-26 16:41:37.396795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.879 qpair failed and we were unable to recover it. 00:36:17.879 [2024-07-26 16:41:37.396982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.879 [2024-07-26 16:41:37.397016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.879 qpair failed and we were unable to recover it. 00:36:17.879 [2024-07-26 16:41:37.397222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.879 [2024-07-26 16:41:37.397257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.879 qpair failed and we were unable to recover it. 00:36:17.879 [2024-07-26 16:41:37.397470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.879 [2024-07-26 16:41:37.397525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.879 qpair failed and we were unable to recover it. 00:36:17.879 [2024-07-26 16:41:37.397754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.879 [2024-07-26 16:41:37.397805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.879 qpair failed and we were unable to recover it. 
00:36:17.879 [2024-07-26 16:41:37.397981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.879 [2024-07-26 16:41:37.398017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.879 qpair failed and we were unable to recover it. 00:36:17.879 [2024-07-26 16:41:37.398213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.879 [2024-07-26 16:41:37.398270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.879 qpair failed and we were unable to recover it. 00:36:17.879 [2024-07-26 16:41:37.398482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.879 [2024-07-26 16:41:37.398537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.879 qpair failed and we were unable to recover it. 00:36:17.879 [2024-07-26 16:41:37.398778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.879 [2024-07-26 16:41:37.398812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.879 qpair failed and we were unable to recover it. 00:36:17.879 [2024-07-26 16:41:37.399003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.879 [2024-07-26 16:41:37.399036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.879 qpair failed and we were unable to recover it. 00:36:17.879 [2024-07-26 16:41:37.399258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.879 [2024-07-26 16:41:37.399312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.879 qpair failed and we were unable to recover it. 00:36:17.879 [2024-07-26 16:41:37.399521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.879 [2024-07-26 16:41:37.399572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.879 qpair failed and we were unable to recover it. 00:36:17.879 [2024-07-26 16:41:37.399779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.879 [2024-07-26 16:41:37.399832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.879 qpair failed and we were unable to recover it. 00:36:17.879 [2024-07-26 16:41:37.400026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.879 [2024-07-26 16:41:37.400085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.879 qpair failed and we were unable to recover it. 00:36:17.879 [2024-07-26 16:41:37.400290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.879 [2024-07-26 16:41:37.400341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.879 qpair failed and we were unable to recover it. 
00:36:17.879 [2024-07-26 16:41:37.400559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.879 [2024-07-26 16:41:37.400611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.879 qpair failed and we were unable to recover it. 00:36:17.879 [2024-07-26 16:41:37.400840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.879 [2024-07-26 16:41:37.400901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.879 qpair failed and we were unable to recover it. 00:36:17.879 [2024-07-26 16:41:37.401071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.879 [2024-07-26 16:41:37.401109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.879 qpair failed and we were unable to recover it. 00:36:17.879 [2024-07-26 16:41:37.401289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.879 [2024-07-26 16:41:37.401341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.879 qpair failed and we were unable to recover it. 00:36:17.879 [2024-07-26 16:41:37.401550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.879 [2024-07-26 16:41:37.401612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.879 qpair failed and we were unable to recover it. 00:36:17.879 [2024-07-26 16:41:37.401882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.879 [2024-07-26 16:41:37.401934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.879 qpair failed and we were unable to recover it. 00:36:17.879 [2024-07-26 16:41:37.402109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.879 [2024-07-26 16:41:37.402162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.879 qpair failed and we were unable to recover it. 00:36:17.879 [2024-07-26 16:41:37.402348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.879 [2024-07-26 16:41:37.402409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.879 qpair failed and we were unable to recover it. 00:36:17.879 [2024-07-26 16:41:37.402624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.880 [2024-07-26 16:41:37.402675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.880 qpair failed and we were unable to recover it. 00:36:17.880 [2024-07-26 16:41:37.402879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.880 [2024-07-26 16:41:37.402912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.880 qpair failed and we were unable to recover it. 
00:36:17.880 [2024-07-26 16:41:37.403094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.880 [2024-07-26 16:41:37.403132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.880 qpair failed and we were unable to recover it. 00:36:17.880 [2024-07-26 16:41:37.403315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.880 [2024-07-26 16:41:37.403369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.880 qpair failed and we were unable to recover it. 00:36:17.880 [2024-07-26 16:41:37.403630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.880 [2024-07-26 16:41:37.403683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.880 qpair failed and we were unable to recover it. 00:36:17.880 [2024-07-26 16:41:37.403866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.880 [2024-07-26 16:41:37.403905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.880 qpair failed and we were unable to recover it. 00:36:17.880 [2024-07-26 16:41:37.404132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.880 [2024-07-26 16:41:37.404168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.880 qpair failed and we were unable to recover it. 00:36:17.880 [2024-07-26 16:41:37.404452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.880 [2024-07-26 16:41:37.404489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.880 qpair failed and we were unable to recover it. 00:36:17.880 [2024-07-26 16:41:37.404679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.880 [2024-07-26 16:41:37.404722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.880 qpair failed and we were unable to recover it. 00:36:17.880 [2024-07-26 16:41:37.404895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.880 [2024-07-26 16:41:37.404931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.880 qpair failed and we were unable to recover it. 00:36:17.880 [2024-07-26 16:41:37.405126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.880 [2024-07-26 16:41:37.405162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.880 qpair failed and we were unable to recover it. 00:36:17.880 [2024-07-26 16:41:37.405362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.880 [2024-07-26 16:41:37.405418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.880 qpair failed and we were unable to recover it. 
00:36:17.880 [2024-07-26 16:41:37.405639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.880 [2024-07-26 16:41:37.405700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.880 qpair failed and we were unable to recover it. 00:36:17.880 [2024-07-26 16:41:37.405908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.880 [2024-07-26 16:41:37.405961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.880 qpair failed and we were unable to recover it. 00:36:17.880 [2024-07-26 16:41:37.406146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.880 [2024-07-26 16:41:37.406203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.880 qpair failed and we were unable to recover it. 00:36:17.880 [2024-07-26 16:41:37.406420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.880 [2024-07-26 16:41:37.406484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.880 qpair failed and we were unable to recover it. 00:36:17.880 [2024-07-26 16:41:37.406706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.880 [2024-07-26 16:41:37.406759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.880 qpair failed and we were unable to recover it. 00:36:17.880 [2024-07-26 16:41:37.406920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.880 [2024-07-26 16:41:37.406954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.880 qpair failed and we were unable to recover it. 00:36:17.880 [2024-07-26 16:41:37.407152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.880 [2024-07-26 16:41:37.407209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.880 qpair failed and we were unable to recover it. 00:36:17.880 [2024-07-26 16:41:37.407461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.880 [2024-07-26 16:41:37.407511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.880 qpair failed and we were unable to recover it. 00:36:17.880 [2024-07-26 16:41:37.407741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.880 [2024-07-26 16:41:37.407794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.880 qpair failed and we were unable to recover it. 00:36:17.880 [2024-07-26 16:41:37.407952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.880 [2024-07-26 16:41:37.407997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.880 qpair failed and we were unable to recover it. 
00:36:17.880 [2024-07-26 16:41:37.408221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.880 [2024-07-26 16:41:37.408277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.880 qpair failed and we were unable to recover it. 00:36:17.880 [2024-07-26 16:41:37.408531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.880 [2024-07-26 16:41:37.408571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.880 qpair failed and we were unable to recover it. 00:36:17.880 [2024-07-26 16:41:37.408773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.880 [2024-07-26 16:41:37.408818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.880 qpair failed and we were unable to recover it. 00:36:17.880 [2024-07-26 16:41:37.409002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.880 [2024-07-26 16:41:37.409043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.880 qpair failed and we were unable to recover it. 00:36:17.880 [2024-07-26 16:41:37.409279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.880 [2024-07-26 16:41:37.409315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.880 qpair failed and we were unable to recover it. 00:36:17.880 [2024-07-26 16:41:37.409548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.880 [2024-07-26 16:41:37.409585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.880 qpair failed and we were unable to recover it. 00:36:17.880 [2024-07-26 16:41:37.409809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.880 [2024-07-26 16:41:37.409854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.880 qpair failed and we were unable to recover it. 00:36:17.880 [2024-07-26 16:41:37.410021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.880 [2024-07-26 16:41:37.410072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.880 qpair failed and we were unable to recover it. 00:36:17.880 [2024-07-26 16:41:37.410273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.880 [2024-07-26 16:41:37.410311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.880 qpair failed and we were unable to recover it. 00:36:17.880 [2024-07-26 16:41:37.410520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.880 [2024-07-26 16:41:37.410556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.880 qpair failed and we were unable to recover it. 
00:36:17.880 [2024-07-26 16:41:37.410722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.880 [2024-07-26 16:41:37.410757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.880 qpair failed and we were unable to recover it. 00:36:17.880 [2024-07-26 16:41:37.410932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.880 [2024-07-26 16:41:37.410964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.880 qpair failed and we were unable to recover it. 00:36:17.880 [2024-07-26 16:41:37.411130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.880 [2024-07-26 16:41:37.411164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.880 qpair failed and we were unable to recover it. 00:36:17.880 [2024-07-26 16:41:37.411373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.880 [2024-07-26 16:41:37.411406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.880 qpair failed and we were unable to recover it. 00:36:17.880 [2024-07-26 16:41:37.411562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.880 [2024-07-26 16:41:37.411594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.880 qpair failed and we were unable to recover it. 00:36:17.880 [2024-07-26 16:41:37.411757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.880 [2024-07-26 16:41:37.411789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.880 qpair failed and we were unable to recover it. 00:36:17.881 [2024-07-26 16:41:37.411970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.881 [2024-07-26 16:41:37.412013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.881 qpair failed and we were unable to recover it. 00:36:17.881 [2024-07-26 16:41:37.412205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.881 [2024-07-26 16:41:37.412237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.881 qpair failed and we were unable to recover it. 00:36:17.881 [2024-07-26 16:41:37.412410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.881 [2024-07-26 16:41:37.412442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.881 qpair failed and we were unable to recover it. 00:36:17.881 [2024-07-26 16:41:37.412625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.881 [2024-07-26 16:41:37.412657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.881 qpair failed and we were unable to recover it. 
00:36:17.881 [2024-07-26 16:41:37.412806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.881 [2024-07-26 16:41:37.412856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.881 qpair failed and we were unable to recover it. 00:36:17.881 [2024-07-26 16:41:37.413026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.881 [2024-07-26 16:41:37.413063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.881 qpair failed and we were unable to recover it. 00:36:17.881 [2024-07-26 16:41:37.413245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.881 [2024-07-26 16:41:37.413277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.881 qpair failed and we were unable to recover it. 00:36:17.881 [2024-07-26 16:41:37.413431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.881 [2024-07-26 16:41:37.413481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.881 qpair failed and we were unable to recover it. 00:36:17.881 [2024-07-26 16:41:37.413676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.881 [2024-07-26 16:41:37.413713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.881 qpair failed and we were unable to recover it. 00:36:17.881 [2024-07-26 16:41:37.413901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.881 [2024-07-26 16:41:37.413936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.881 qpair failed and we were unable to recover it. 00:36:17.881 [2024-07-26 16:41:37.414146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.881 [2024-07-26 16:41:37.414178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.881 qpair failed and we were unable to recover it. 00:36:17.881 [2024-07-26 16:41:37.414368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.881 [2024-07-26 16:41:37.414400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.881 qpair failed and we were unable to recover it. 00:36:17.881 [2024-07-26 16:41:37.414604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.881 [2024-07-26 16:41:37.414637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.881 qpair failed and we were unable to recover it. 00:36:17.881 [2024-07-26 16:41:37.414848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.881 [2024-07-26 16:41:37.414884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.881 qpair failed and we were unable to recover it. 
00:36:17.881 [2024-07-26 16:41:37.415088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.881 [2024-07-26 16:41:37.415121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.881 qpair failed and we were unable to recover it. 00:36:17.881 [2024-07-26 16:41:37.415266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.881 [2024-07-26 16:41:37.415298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.881 qpair failed and we were unable to recover it. 00:36:17.881 [2024-07-26 16:41:37.415462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.881 [2024-07-26 16:41:37.415494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.881 qpair failed and we were unable to recover it. 00:36:17.881 [2024-07-26 16:41:37.415685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.881 [2024-07-26 16:41:37.415720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.881 qpair failed and we were unable to recover it. 00:36:17.881 [2024-07-26 16:41:37.415993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.881 [2024-07-26 16:41:37.416030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.881 qpair failed and we were unable to recover it. 00:36:17.881 [2024-07-26 16:41:37.416332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.881 [2024-07-26 16:41:37.416373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.881 qpair failed and we were unable to recover it. 00:36:17.881 [2024-07-26 16:41:37.416554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.881 [2024-07-26 16:41:37.416586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.881 qpair failed and we were unable to recover it. 00:36:17.881 [2024-07-26 16:41:37.416792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.881 [2024-07-26 16:41:37.416827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.881 qpair failed and we were unable to recover it. 00:36:17.881 [2024-07-26 16:41:37.416994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.881 [2024-07-26 16:41:37.417029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.881 qpair failed and we were unable to recover it. 00:36:17.881 [2024-07-26 16:41:37.417204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.881 [2024-07-26 16:41:37.417237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.881 qpair failed and we were unable to recover it. 
00:36:17.881 [2024-07-26 16:41:37.417398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.881 [2024-07-26 16:41:37.417430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.881 qpair failed and we were unable to recover it. 00:36:17.881 [2024-07-26 16:41:37.417630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.881 [2024-07-26 16:41:37.417667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.881 qpair failed and we were unable to recover it. 00:36:17.881 [2024-07-26 16:41:37.417879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.881 [2024-07-26 16:41:37.417915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.881 qpair failed and we were unable to recover it. 00:36:17.881 [2024-07-26 16:41:37.418124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.881 [2024-07-26 16:41:37.418156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.881 qpair failed and we were unable to recover it. 00:36:17.881 [2024-07-26 16:41:37.418363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.881 [2024-07-26 16:41:37.418395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.881 qpair failed and we were unable to recover it. 00:36:17.881 [2024-07-26 16:41:37.418573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.881 [2024-07-26 16:41:37.418609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.881 qpair failed and we were unable to recover it. 00:36:17.881 [2024-07-26 16:41:37.418796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.881 [2024-07-26 16:41:37.418831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.881 qpair failed and we were unable to recover it. 00:36:17.881 [2024-07-26 16:41:37.419030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.881 [2024-07-26 16:41:37.419075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.881 qpair failed and we were unable to recover it. 00:36:17.881 [2024-07-26 16:41:37.419247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.881 [2024-07-26 16:41:37.419283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.881 qpair failed and we were unable to recover it. 00:36:17.881 [2024-07-26 16:41:37.419495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.881 [2024-07-26 16:41:37.419532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.881 qpair failed and we were unable to recover it. 
00:36:17.881 [2024-07-26 16:41:37.419722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.881 [2024-07-26 16:41:37.419757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.881 qpair failed and we were unable to recover it. 00:36:17.881 [2024-07-26 16:41:37.419953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.881 [2024-07-26 16:41:37.419988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.881 qpair failed and we were unable to recover it. 00:36:17.881 [2024-07-26 16:41:37.420205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.881 [2024-07-26 16:41:37.420238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.881 qpair failed and we were unable to recover it. 00:36:17.881 [2024-07-26 16:41:37.420441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.882 [2024-07-26 16:41:37.420473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.882 qpair failed and we were unable to recover it. 00:36:17.882 [2024-07-26 16:41:37.420647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.882 [2024-07-26 16:41:37.420679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.882 qpair failed and we were unable to recover it. 00:36:17.882 [2024-07-26 16:41:37.420862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.882 [2024-07-26 16:41:37.420894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.882 qpair failed and we were unable to recover it. 00:36:17.882 [2024-07-26 16:41:37.421077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.882 [2024-07-26 16:41:37.421110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.882 qpair failed and we were unable to recover it. 00:36:17.882 [2024-07-26 16:41:37.421261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.882 [2024-07-26 16:41:37.421293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.882 qpair failed and we were unable to recover it. 00:36:17.882 [2024-07-26 16:41:37.421474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.882 [2024-07-26 16:41:37.421509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.882 qpair failed and we were unable to recover it. 00:36:17.882 [2024-07-26 16:41:37.421700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.882 [2024-07-26 16:41:37.421736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.882 qpair failed and we were unable to recover it. 
00:36:17.882 [2024-07-26 16:41:37.421934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.882 [2024-07-26 16:41:37.421970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.882 qpair failed and we were unable to recover it. 00:36:17.882 [2024-07-26 16:41:37.422168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.882 [2024-07-26 16:41:37.422201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.882 qpair failed and we were unable to recover it. 00:36:17.882 [2024-07-26 16:41:37.422372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.882 [2024-07-26 16:41:37.422404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.882 qpair failed and we were unable to recover it. 00:36:17.882 [2024-07-26 16:41:37.422573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.882 [2024-07-26 16:41:37.422609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.882 qpair failed and we were unable to recover it. 00:36:17.882 [2024-07-26 16:41:37.422805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.882 [2024-07-26 16:41:37.422841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.882 qpair failed and we were unable to recover it. 00:36:17.882 [2024-07-26 16:41:37.423093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.882 [2024-07-26 16:41:37.423142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.882 qpair failed and we were unable to recover it. 00:36:17.882 [2024-07-26 16:41:37.423286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.882 [2024-07-26 16:41:37.423318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.882 qpair failed and we were unable to recover it. 00:36:17.882 [2024-07-26 16:41:37.423503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.882 [2024-07-26 16:41:37.423536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.882 qpair failed and we were unable to recover it. 00:36:17.882 [2024-07-26 16:41:37.423714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.882 [2024-07-26 16:41:37.423746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.882 qpair failed and we were unable to recover it. 00:36:17.882 [2024-07-26 16:41:37.423930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.882 [2024-07-26 16:41:37.423962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.882 qpair failed and we were unable to recover it. 
00:36:17.882 [2024-07-26 16:41:37.424147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.882 [2024-07-26 16:41:37.424179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.882 qpair failed and we were unable to recover it. 00:36:17.882 [2024-07-26 16:41:37.424327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.882 [2024-07-26 16:41:37.424367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.882 qpair failed and we were unable to recover it. 00:36:17.882 [2024-07-26 16:41:37.424564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.882 [2024-07-26 16:41:37.424599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.882 qpair failed and we were unable to recover it. 00:36:17.882 [2024-07-26 16:41:37.424837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.882 [2024-07-26 16:41:37.424869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.882 qpair failed and we were unable to recover it. 00:36:17.882 [2024-07-26 16:41:37.425055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.882 [2024-07-26 16:41:37.425112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.882 qpair failed and we were unable to recover it. 00:36:17.882 [2024-07-26 16:41:37.425282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.882 [2024-07-26 16:41:37.425314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.882 qpair failed and we were unable to recover it. 00:36:17.882 [2024-07-26 16:41:37.425509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.882 [2024-07-26 16:41:37.425546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.882 qpair failed and we were unable to recover it. 00:36:17.882 [2024-07-26 16:41:37.425767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.882 [2024-07-26 16:41:37.425804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.882 qpair failed and we were unable to recover it. 00:36:17.882 [2024-07-26 16:41:37.425982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.882 [2024-07-26 16:41:37.426017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.882 qpair failed and we were unable to recover it. 00:36:17.882 [2024-07-26 16:41:37.426204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.882 [2024-07-26 16:41:37.426237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.882 qpair failed and we were unable to recover it. 
00:36:17.882 [2024-07-26 16:41:37.426390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.882 [2024-07-26 16:41:37.426432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.882 qpair failed and we were unable to recover it. 00:36:17.882 [2024-07-26 16:41:37.426637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.882 [2024-07-26 16:41:37.426672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.882 qpair failed and we were unable to recover it. 00:36:17.882 [2024-07-26 16:41:37.426879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.882 [2024-07-26 16:41:37.426915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.882 qpair failed and we were unable to recover it. 00:36:17.882 [2024-07-26 16:41:37.427142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.882 [2024-07-26 16:41:37.427175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.882 qpair failed and we were unable to recover it. 00:36:17.882 [2024-07-26 16:41:37.427331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.882 [2024-07-26 16:41:37.427363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.882 qpair failed and we were unable to recover it. 00:36:17.882 [2024-07-26 16:41:37.427546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.882 [2024-07-26 16:41:37.427595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.883 qpair failed and we were unable to recover it. 00:36:17.883 [2024-07-26 16:41:37.427810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.883 [2024-07-26 16:41:37.427846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.883 qpair failed and we were unable to recover it. 00:36:17.883 [2024-07-26 16:41:37.428013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.883 [2024-07-26 16:41:37.428053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.883 qpair failed and we were unable to recover it. 00:36:17.883 [2024-07-26 16:41:37.428249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.883 [2024-07-26 16:41:37.428286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.883 qpair failed and we were unable to recover it. 00:36:17.883 [2024-07-26 16:41:37.428460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.883 [2024-07-26 16:41:37.428492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.883 qpair failed and we were unable to recover it. 
00:36:17.883 [2024-07-26 16:41:37.428635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.883 [2024-07-26 16:41:37.428667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.883 qpair failed and we were unable to recover it. 00:36:17.883 [2024-07-26 16:41:37.428814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.883 [2024-07-26 16:41:37.428846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.883 qpair failed and we were unable to recover it. 00:36:17.883 [2024-07-26 16:41:37.429036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.883 [2024-07-26 16:41:37.429082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.883 qpair failed and we were unable to recover it. 00:36:17.883 [2024-07-26 16:41:37.429277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.883 [2024-07-26 16:41:37.429309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.883 qpair failed and we were unable to recover it. 00:36:17.883 [2024-07-26 16:41:37.429461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.883 [2024-07-26 16:41:37.429493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.883 qpair failed and we were unable to recover it. 00:36:17.883 [2024-07-26 16:41:37.429642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.883 [2024-07-26 16:41:37.429674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.883 qpair failed and we were unable to recover it. 00:36:17.883 [2024-07-26 16:41:37.429821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.883 [2024-07-26 16:41:37.429853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.883 qpair failed and we were unable to recover it. 00:36:17.883 [2024-07-26 16:41:37.430007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.883 [2024-07-26 16:41:37.430040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.883 qpair failed and we were unable to recover it. 00:36:17.883 [2024-07-26 16:41:37.430223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.883 [2024-07-26 16:41:37.430256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.883 qpair failed and we were unable to recover it. 00:36:17.883 [2024-07-26 16:41:37.430467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.883 [2024-07-26 16:41:37.430502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.883 qpair failed and we were unable to recover it. 
00:36:17.883 [2024-07-26 16:41:37.430681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.883 [2024-07-26 16:41:37.430717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.883 qpair failed and we were unable to recover it. 00:36:17.883 [2024-07-26 16:41:37.430921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.883 [2024-07-26 16:41:37.430953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.883 qpair failed and we were unable to recover it. 00:36:17.883 [2024-07-26 16:41:37.431191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.883 [2024-07-26 16:41:37.431228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.883 qpair failed and we were unable to recover it. 00:36:17.883 [2024-07-26 16:41:37.431408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.883 [2024-07-26 16:41:37.431440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.883 qpair failed and we were unable to recover it. 00:36:17.883 [2024-07-26 16:41:37.431673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.883 [2024-07-26 16:41:37.431708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.883 qpair failed and we were unable to recover it. 00:36:17.883 [2024-07-26 16:41:37.431890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.883 [2024-07-26 16:41:37.431922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.883 qpair failed and we were unable to recover it. 00:36:17.883 [2024-07-26 16:41:37.432102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.883 [2024-07-26 16:41:37.432135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.883 qpair failed and we were unable to recover it. 00:36:17.883 [2024-07-26 16:41:37.432284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.883 [2024-07-26 16:41:37.432317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.883 qpair failed and we were unable to recover it. 00:36:17.883 [2024-07-26 16:41:37.432518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.883 [2024-07-26 16:41:37.432554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.883 qpair failed and we were unable to recover it. 00:36:17.883 [2024-07-26 16:41:37.432749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.883 [2024-07-26 16:41:37.432782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.883 qpair failed and we were unable to recover it. 
00:36:17.883 [2024-07-26 16:41:37.432938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.883 [2024-07-26 16:41:37.432974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.883 qpair failed and we were unable to recover it. 00:36:17.883 [2024-07-26 16:41:37.433203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.883 [2024-07-26 16:41:37.433239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.883 qpair failed and we were unable to recover it. 00:36:17.883 [2024-07-26 16:41:37.433471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.883 [2024-07-26 16:41:37.433503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.883 qpair failed and we were unable to recover it. 00:36:17.883 [2024-07-26 16:41:37.433696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.883 [2024-07-26 16:41:37.433728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.883 qpair failed and we were unable to recover it. 00:36:17.883 [2024-07-26 16:41:37.433908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.883 [2024-07-26 16:41:37.433941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.883 qpair failed and we were unable to recover it. 00:36:17.883 [2024-07-26 16:41:37.434155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.883 [2024-07-26 16:41:37.434188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.883 qpair failed and we were unable to recover it. 00:36:17.883 [2024-07-26 16:41:37.434369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.883 [2024-07-26 16:41:37.434406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.883 qpair failed and we were unable to recover it. 00:36:17.883 [2024-07-26 16:41:37.434585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.883 [2024-07-26 16:41:37.434617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.883 qpair failed and we were unable to recover it. 00:36:17.883 [2024-07-26 16:41:37.434768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.883 [2024-07-26 16:41:37.434800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.883 qpair failed and we were unable to recover it. 00:36:17.883 [2024-07-26 16:41:37.434981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.883 [2024-07-26 16:41:37.435014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.883 qpair failed and we were unable to recover it. 
00:36:17.883 [2024-07-26 16:41:37.435249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.883 [2024-07-26 16:41:37.435283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.883 qpair failed and we were unable to recover it. 00:36:17.883 [2024-07-26 16:41:37.435456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.883 [2024-07-26 16:41:37.435489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.883 qpair failed and we were unable to recover it. 00:36:17.883 [2024-07-26 16:41:37.435668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.435700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.435901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.435936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.436096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.436133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.436302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.436334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.436565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.436600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.436826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.436862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.437029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.437084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.437283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.437315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 
00:36:17.884 [2024-07-26 16:41:37.437478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.437510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.437687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.437719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.437936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.437968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.438167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.438200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.438393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.438428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.438621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.438653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.438807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.438839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.439029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.439066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.439255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.439287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.439480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.439512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 
00:36:17.884 [2024-07-26 16:41:37.439705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.439741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.439952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.439984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.440167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.440201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.440360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.440393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.440608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.440655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.440843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.440876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.441078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.441114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.441335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.441372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.441608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.441644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.441841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.441873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 
00:36:17.884 [2024-07-26 16:41:37.442056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.442094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.442238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.442270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.442424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.442474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.442667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.442700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.442853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.442885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.442950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:36:17.884 [2024-07-26 16:41:37.443245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.443295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.443508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.443545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.443758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.443798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.444002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.444049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 
00:36:17.884 [2024-07-26 16:41:37.444237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.444274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.444521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.444561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.444800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.444834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.445007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.445041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.445244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.445279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.445467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.445536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.445720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.445753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.445965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.446003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.446226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.446259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.446448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.446482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 
00:36:17.884 [2024-07-26 16:41:37.446671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.446706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.446879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.446914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.447108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.447142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.447334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.447386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.447614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.447650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.447809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.447843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.448050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.448094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.448320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.448366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.448547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.448580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 00:36:17.884 [2024-07-26 16:41:37.448753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.884 [2024-07-26 16:41:37.448789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.884 qpair failed and we were unable to recover it. 
00:36:17.884 [2024-07-26 16:41:37.448993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.449034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.449225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.449259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.449461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.449508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.449701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.449738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.449955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.449988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.450158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.450193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.450415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.450467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.450664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.450699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.450867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.450900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.451107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.451158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 
00:36:17.885 [2024-07-26 16:41:37.451341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.451373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.451588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.451620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.451800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.451832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.452024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.452066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.452221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.452254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.452483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.452518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.452697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.452729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.452900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.452935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.453150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.453182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.453350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.453381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 
00:36:17.885 [2024-07-26 16:41:37.453557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.453589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.453787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.453840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.454019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.454055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.454255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.454290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.454475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.454508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.454681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.454712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.454905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.454941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.455129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.455166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.455363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.455395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.455577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.455609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 
00:36:17.885 [2024-07-26 16:41:37.455791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.455827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.455996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.456028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.456216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.456249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.456400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.456432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.456607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.456639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.456824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.456857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.457006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.457038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.457281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.457327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.457520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.457575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.457777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.457829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 
00:36:17.885 [2024-07-26 16:41:37.457985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.458017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.458212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.458246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.458445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.458488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.458700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.458739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.458915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.458951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.459158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.459190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.459388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.459424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.459598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.459633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.459791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.459827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.459993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.460030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 
00:36:17.885 [2024-07-26 16:41:37.460191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.460223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.460430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.460465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.460655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.460690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.460893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.460929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.461148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.461196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.461422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.461469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.461662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.461716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.461962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.462020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.462188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.885 [2024-07-26 16:41:37.462222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.885 qpair failed and we were unable to recover it. 00:36:17.885 [2024-07-26 16:41:37.462418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.886 [2024-07-26 16:41:37.462470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.886 qpair failed and we were unable to recover it. 
00:36:17.886 [2024-07-26 16:41:37.462669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.886 [2024-07-26 16:41:37.462720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.886 qpair failed and we were unable to recover it. 00:36:17.886 [2024-07-26 16:41:37.462869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.886 [2024-07-26 16:41:37.462902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.886 qpair failed and we were unable to recover it. 00:36:17.886 [2024-07-26 16:41:37.463093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.886 [2024-07-26 16:41:37.463128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.886 qpair failed and we were unable to recover it. 00:36:17.886 [2024-07-26 16:41:37.463341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.886 [2024-07-26 16:41:37.463379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.886 qpair failed and we were unable to recover it. 00:36:17.886 [2024-07-26 16:41:37.463573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.886 [2024-07-26 16:41:37.463609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.886 qpair failed and we were unable to recover it. 00:36:17.886 [2024-07-26 16:41:37.463795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.886 [2024-07-26 16:41:37.463832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.886 qpair failed and we were unable to recover it. 00:36:17.886 [2024-07-26 16:41:37.464016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.886 [2024-07-26 16:41:37.464049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.886 qpair failed and we were unable to recover it. 00:36:17.886 [2024-07-26 16:41:37.464221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.886 [2024-07-26 16:41:37.464253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.886 qpair failed and we were unable to recover it. 00:36:17.886 [2024-07-26 16:41:37.464478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.886 [2024-07-26 16:41:37.464528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.886 qpair failed and we were unable to recover it. 00:36:17.886 [2024-07-26 16:41:37.464770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.886 [2024-07-26 16:41:37.464807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.886 qpair failed and we were unable to recover it. 
00:36:17.886 [2024-07-26 16:41:37.464999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.886 [2024-07-26 16:41:37.465035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.886 qpair failed and we were unable to recover it. 00:36:17.886 [2024-07-26 16:41:37.465243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.886 [2024-07-26 16:41:37.465276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.886 qpair failed and we were unable to recover it. 00:36:17.886 [2024-07-26 16:41:37.465521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.886 [2024-07-26 16:41:37.465556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.886 qpair failed and we were unable to recover it. 00:36:17.886 [2024-07-26 16:41:37.465754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.886 [2024-07-26 16:41:37.465790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.886 qpair failed and we were unable to recover it. 00:36:17.886 [2024-07-26 16:41:37.466018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.886 [2024-07-26 16:41:37.466054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.886 qpair failed and we were unable to recover it. 00:36:17.886 [2024-07-26 16:41:37.466258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.886 [2024-07-26 16:41:37.466290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.886 qpair failed and we were unable to recover it. 00:36:17.886 [2024-07-26 16:41:37.466463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.886 [2024-07-26 16:41:37.466498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.886 qpair failed and we were unable to recover it. 00:36:17.886 [2024-07-26 16:41:37.466700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.886 [2024-07-26 16:41:37.466747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.886 qpair failed and we were unable to recover it. 00:36:17.886 [2024-07-26 16:41:37.466927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.886 [2024-07-26 16:41:37.466963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.886 qpair failed and we were unable to recover it. 00:36:17.886 [2024-07-26 16:41:37.467151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.886 [2024-07-26 16:41:37.467184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.886 qpair failed and we were unable to recover it. 
00:36:17.886 [2024-07-26 16:41:37.467360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.886 [2024-07-26 16:41:37.467409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.886 qpair failed and we were unable to recover it. 00:36:17.886 [2024-07-26 16:41:37.467606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.886 [2024-07-26 16:41:37.467641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.886 qpair failed and we were unable to recover it. 00:36:17.886 [2024-07-26 16:41:37.467831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.886 [2024-07-26 16:41:37.467872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.886 qpair failed and we were unable to recover it. 00:36:17.886 [2024-07-26 16:41:37.468070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.886 [2024-07-26 16:41:37.468120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.886 qpair failed and we were unable to recover it. 00:36:17.886 [2024-07-26 16:41:37.468296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.886 [2024-07-26 16:41:37.468328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.886 qpair failed and we were unable to recover it. 00:36:17.886 [2024-07-26 16:41:37.468494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.886 [2024-07-26 16:41:37.468529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.886 qpair failed and we were unable to recover it. 00:36:17.886 [2024-07-26 16:41:37.468759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.886 [2024-07-26 16:41:37.468795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.886 qpair failed and we were unable to recover it. 00:36:17.886 [2024-07-26 16:41:37.468986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.886 [2024-07-26 16:41:37.469021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.886 qpair failed and we were unable to recover it. 00:36:17.886 [2024-07-26 16:41:37.469217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.886 [2024-07-26 16:41:37.469249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.886 qpair failed and we were unable to recover it. 00:36:17.886 [2024-07-26 16:41:37.469450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.886 [2024-07-26 16:41:37.469486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.886 qpair failed and we were unable to recover it. 
00:36:17.886 [2024-07-26 16:41:37.469684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.886 [2024-07-26 16:41:37.469724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.886 qpair failed and we were unable to recover it. 00:36:17.886 [2024-07-26 16:41:37.469944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.886 [2024-07-26 16:41:37.469980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.886 qpair failed and we were unable to recover it. 00:36:17.886 [2024-07-26 16:41:37.470189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.886 [2024-07-26 16:41:37.470221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.886 qpair failed and we were unable to recover it. 00:36:17.886 [2024-07-26 16:41:37.470406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.886 [2024-07-26 16:41:37.470454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.886 qpair failed and we were unable to recover it. 00:36:17.886 [2024-07-26 16:41:37.470801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.886 [2024-07-26 16:41:37.470838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.886 qpair failed and we were unable to recover it. 00:36:17.886 [2024-07-26 16:41:37.471040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.886 [2024-07-26 16:41:37.471087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.886 qpair failed and we were unable to recover it. 00:36:17.886 [2024-07-26 16:41:37.471301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.887 [2024-07-26 16:41:37.471334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.887 qpair failed and we were unable to recover it. 00:36:17.887 [2024-07-26 16:41:37.471503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.887 [2024-07-26 16:41:37.471553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.887 qpair failed and we were unable to recover it. 00:36:17.887 [2024-07-26 16:41:37.471775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.887 [2024-07-26 16:41:37.471825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.887 qpair failed and we were unable to recover it. 00:36:17.887 [2024-07-26 16:41:37.472012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.887 [2024-07-26 16:41:37.472044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.887 qpair failed and we were unable to recover it. 
00:36:17.887 [2024-07-26 16:41:37.472254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.887 [2024-07-26 16:41:37.472287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.887 qpair failed and we were unable to recover it. 00:36:17.887 [2024-07-26 16:41:37.472456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.887 [2024-07-26 16:41:37.472507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:17.887 qpair failed and we were unable to recover it. 00:36:17.887 [2024-07-26 16:41:37.472712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.887 [2024-07-26 16:41:37.472750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.887 qpair failed and we were unable to recover it. 00:36:17.887 [2024-07-26 16:41:37.472962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.887 [2024-07-26 16:41:37.472998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.887 qpair failed and we were unable to recover it. 00:36:17.887 [2024-07-26 16:41:37.473245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.887 [2024-07-26 16:41:37.473282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.887 qpair failed and we were unable to recover it. 00:36:17.887 [2024-07-26 16:41:37.473579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.887 [2024-07-26 16:41:37.473633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.887 qpair failed and we were unable to recover it. 00:36:17.887 [2024-07-26 16:41:37.473796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.887 [2024-07-26 16:41:37.473832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.887 qpair failed and we were unable to recover it. 00:36:17.887 [2024-07-26 16:41:37.474036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.887 [2024-07-26 16:41:37.474078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.887 qpair failed and we were unable to recover it. 00:36:17.887 [2024-07-26 16:41:37.474295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.887 [2024-07-26 16:41:37.474327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.887 qpair failed and we were unable to recover it. 00:36:17.887 [2024-07-26 16:41:37.474508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.887 [2024-07-26 16:41:37.474544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.887 qpair failed and we were unable to recover it. 
00:36:17.887 [2024-07-26 16:41:37.474751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.887 [2024-07-26 16:41:37.474788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.887 qpair failed and we were unable to recover it. 00:36:17.887 [2024-07-26 16:41:37.475044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.887 [2024-07-26 16:41:37.475108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.887 qpair failed and we were unable to recover it. 00:36:17.887 [2024-07-26 16:41:37.475266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.887 [2024-07-26 16:41:37.475298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.887 qpair failed and we were unable to recover it. 00:36:17.887 [2024-07-26 16:41:37.475477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.887 [2024-07-26 16:41:37.475510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.887 qpair failed and we were unable to recover it. 00:36:17.887 [2024-07-26 16:41:37.475706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.887 [2024-07-26 16:41:37.475741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.887 qpair failed and we were unable to recover it. 00:36:17.887 [2024-07-26 16:41:37.475911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.887 [2024-07-26 16:41:37.475947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.887 qpair failed and we were unable to recover it. 00:36:17.887 [2024-07-26 16:41:37.476180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.887 [2024-07-26 16:41:37.476214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.887 qpair failed and we were unable to recover it. 00:36:17.887 [2024-07-26 16:41:37.476396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.887 [2024-07-26 16:41:37.476445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.887 qpair failed and we were unable to recover it. 00:36:17.887 [2024-07-26 16:41:37.476664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.887 [2024-07-26 16:41:37.476700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.887 qpair failed and we were unable to recover it. 00:36:17.887 [2024-07-26 16:41:37.476926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.887 [2024-07-26 16:41:37.476962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.887 qpair failed and we were unable to recover it. 
00:36:17.887 [2024-07-26 16:41:37.477126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.887 [2024-07-26 16:41:37.477158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.887 qpair failed and we were unable to recover it. 00:36:17.887 [2024-07-26 16:41:37.477347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.887 [2024-07-26 16:41:37.477383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.887 qpair failed and we were unable to recover it. 00:36:17.887 [2024-07-26 16:41:37.477606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.887 [2024-07-26 16:41:37.477650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.887 qpair failed and we were unable to recover it. 00:36:17.887 [2024-07-26 16:41:37.477907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.887 [2024-07-26 16:41:37.477943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.887 qpair failed and we were unable to recover it. 00:36:17.887 [2024-07-26 16:41:37.478147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.887 [2024-07-26 16:41:37.478179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.887 qpair failed and we were unable to recover it. 00:36:17.887 [2024-07-26 16:41:37.478389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.887 [2024-07-26 16:41:37.478425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.887 qpair failed and we were unable to recover it. 00:36:17.887 [2024-07-26 16:41:37.478619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.887 [2024-07-26 16:41:37.478651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.887 qpair failed and we were unable to recover it. 00:36:17.887 [2024-07-26 16:41:37.478845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.887 [2024-07-26 16:41:37.478881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.887 qpair failed and we were unable to recover it. 00:36:17.887 [2024-07-26 16:41:37.479078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.888 [2024-07-26 16:41:37.479127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.888 qpair failed and we were unable to recover it. 00:36:17.888 [2024-07-26 16:41:37.479305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.888 [2024-07-26 16:41:37.479337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.888 qpair failed and we were unable to recover it. 
00:36:17.888 [2024-07-26 16:41:37.479558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:17.888 [2024-07-26 16:41:37.479594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 
00:36:17.888 qpair failed and we were unable to recover it. 
00:36:17.894 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 16:41:37.479558 through 16:41:37.529552, alternating between tqpair=0x6150001f2780 and tqpair=0x615000210000, always with addr=10.0.0.2, port=4420 ...]
00:36:17.894 [2024-07-26 16:41:37.529702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.894 [2024-07-26 16:41:37.529735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.894 qpair failed and we were unable to recover it. 00:36:17.894 [2024-07-26 16:41:37.529945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.894 [2024-07-26 16:41:37.529978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.894 qpair failed and we were unable to recover it. 00:36:17.894 [2024-07-26 16:41:37.530153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.894 [2024-07-26 16:41:37.530186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.894 qpair failed and we were unable to recover it. 00:36:17.894 [2024-07-26 16:41:37.530328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.894 [2024-07-26 16:41:37.530378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.894 qpair failed and we were unable to recover it. 00:36:17.894 [2024-07-26 16:41:37.530552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.894 [2024-07-26 16:41:37.530600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.894 qpair failed and we were unable to recover it. 00:36:17.894 [2024-07-26 16:41:37.530793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.894 [2024-07-26 16:41:37.530826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.894 qpair failed and we were unable to recover it. 00:36:17.894 [2024-07-26 16:41:37.531046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.894 [2024-07-26 16:41:37.531088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.894 qpair failed and we were unable to recover it. 00:36:17.894 [2024-07-26 16:41:37.531278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.894 [2024-07-26 16:41:37.531314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.894 qpair failed and we were unable to recover it. 00:36:17.894 [2024-07-26 16:41:37.531503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.894 [2024-07-26 16:41:37.531534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.894 qpair failed and we were unable to recover it. 00:36:17.894 [2024-07-26 16:41:37.531736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.894 [2024-07-26 16:41:37.531772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.894 qpair failed and we were unable to recover it. 
00:36:17.894 [2024-07-26 16:41:37.531961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.894 [2024-07-26 16:41:37.531997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.894 qpair failed and we were unable to recover it. 00:36:17.894 [2024-07-26 16:41:37.532192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.894 [2024-07-26 16:41:37.532224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.894 qpair failed and we were unable to recover it. 00:36:17.894 [2024-07-26 16:41:37.532417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.894 [2024-07-26 16:41:37.532453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.894 qpair failed and we were unable to recover it. 00:36:17.894 [2024-07-26 16:41:37.532682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.894 [2024-07-26 16:41:37.532715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.894 qpair failed and we were unable to recover it. 00:36:17.894 [2024-07-26 16:41:37.532911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.894 [2024-07-26 16:41:37.532942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.894 qpair failed and we were unable to recover it. 00:36:17.894 [2024-07-26 16:41:37.533141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.894 [2024-07-26 16:41:37.533177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.894 qpair failed and we were unable to recover it. 00:36:17.894 [2024-07-26 16:41:37.533398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.894 [2024-07-26 16:41:37.533434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.894 qpair failed and we were unable to recover it. 00:36:17.894 [2024-07-26 16:41:37.533635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.894 [2024-07-26 16:41:37.533667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.894 qpair failed and we were unable to recover it. 00:36:17.894 [2024-07-26 16:41:37.533884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.894 [2024-07-26 16:41:37.533919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.894 qpair failed and we were unable to recover it. 00:36:17.894 [2024-07-26 16:41:37.534089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.894 [2024-07-26 16:41:37.534125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.894 qpair failed and we were unable to recover it. 
00:36:17.894 [2024-07-26 16:41:37.534298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.894 [2024-07-26 16:41:37.534330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.894 qpair failed and we were unable to recover it. 00:36:17.894 [2024-07-26 16:41:37.534533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.894 [2024-07-26 16:41:37.534568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.894 qpair failed and we were unable to recover it. 00:36:17.894 [2024-07-26 16:41:37.534795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.894 [2024-07-26 16:41:37.534835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.894 qpair failed and we were unable to recover it. 00:36:17.894 [2024-07-26 16:41:37.534998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.894 [2024-07-26 16:41:37.535030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.894 qpair failed and we were unable to recover it. 00:36:17.894 [2024-07-26 16:41:37.535219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.894 [2024-07-26 16:41:37.535252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.894 qpair failed and we were unable to recover it. 00:36:17.894 [2024-07-26 16:41:37.535455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.894 [2024-07-26 16:41:37.535490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.894 qpair failed and we were unable to recover it. 00:36:17.894 [2024-07-26 16:41:37.535675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.894 [2024-07-26 16:41:37.535707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.894 qpair failed and we were unable to recover it. 00:36:17.894 [2024-07-26 16:41:37.535902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.894 [2024-07-26 16:41:37.535937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.894 qpair failed and we were unable to recover it. 00:36:17.894 [2024-07-26 16:41:37.536159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.894 [2024-07-26 16:41:37.536195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.894 qpair failed and we were unable to recover it. 00:36:17.894 [2024-07-26 16:41:37.536419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.894 [2024-07-26 16:41:37.536451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.894 qpair failed and we were unable to recover it. 
00:36:17.894 [2024-07-26 16:41:37.536637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.894 [2024-07-26 16:41:37.536669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.894 qpair failed and we were unable to recover it. 00:36:17.894 [2024-07-26 16:41:37.536893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.894 [2024-07-26 16:41:37.536929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.894 qpair failed and we were unable to recover it. 00:36:17.895 [2024-07-26 16:41:37.537130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.895 [2024-07-26 16:41:37.537163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.895 qpair failed and we were unable to recover it. 00:36:17.895 [2024-07-26 16:41:37.537400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.895 [2024-07-26 16:41:37.537432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.895 qpair failed and we were unable to recover it. 00:36:17.895 [2024-07-26 16:41:37.537604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.895 [2024-07-26 16:41:37.537636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.895 qpair failed and we were unable to recover it. 00:36:17.895 [2024-07-26 16:41:37.537846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.895 [2024-07-26 16:41:37.537878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.895 qpair failed and we were unable to recover it. 00:36:17.895 [2024-07-26 16:41:37.538109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.895 [2024-07-26 16:41:37.538145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.895 qpair failed and we were unable to recover it. 00:36:17.895 [2024-07-26 16:41:37.538369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.895 [2024-07-26 16:41:37.538405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.895 qpair failed and we were unable to recover it. 00:36:17.895 [2024-07-26 16:41:37.538631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.895 [2024-07-26 16:41:37.538663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.895 qpair failed and we were unable to recover it. 00:36:17.895 [2024-07-26 16:41:37.538844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.895 [2024-07-26 16:41:37.538879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.895 qpair failed and we were unable to recover it. 
00:36:17.895 [2024-07-26 16:41:37.539074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.895 [2024-07-26 16:41:37.539124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.895 qpair failed and we were unable to recover it. 00:36:17.895 [2024-07-26 16:41:37.539300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.895 [2024-07-26 16:41:37.539333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.895 qpair failed and we were unable to recover it. 00:36:17.895 [2024-07-26 16:41:37.539491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.895 [2024-07-26 16:41:37.539526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.895 qpair failed and we were unable to recover it. 00:36:17.895 [2024-07-26 16:41:37.539742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.895 [2024-07-26 16:41:37.539779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.895 qpair failed and we were unable to recover it. 00:36:17.895 [2024-07-26 16:41:37.539950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.895 [2024-07-26 16:41:37.539982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.895 qpair failed and we were unable to recover it. 00:36:17.895 [2024-07-26 16:41:37.540149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.895 [2024-07-26 16:41:37.540182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.895 qpair failed and we were unable to recover it. 00:36:17.895 [2024-07-26 16:41:37.540404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.895 [2024-07-26 16:41:37.540440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.895 qpair failed and we were unable to recover it. 00:36:17.895 [2024-07-26 16:41:37.540643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.895 [2024-07-26 16:41:37.540675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.895 qpair failed and we were unable to recover it. 00:36:17.895 [2024-07-26 16:41:37.540874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.895 [2024-07-26 16:41:37.540929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.895 qpair failed and we were unable to recover it. 00:36:17.895 [2024-07-26 16:41:37.541134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.895 [2024-07-26 16:41:37.541171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.895 qpair failed and we were unable to recover it. 
00:36:17.895 [2024-07-26 16:41:37.541377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.895 [2024-07-26 16:41:37.541409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.895 qpair failed and we were unable to recover it. 00:36:17.895 [2024-07-26 16:41:37.541604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.895 [2024-07-26 16:41:37.541640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.895 qpair failed and we were unable to recover it. 00:36:17.895 [2024-07-26 16:41:37.541833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.895 [2024-07-26 16:41:37.541870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.895 qpair failed and we were unable to recover it. 00:36:17.895 [2024-07-26 16:41:37.542064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.895 [2024-07-26 16:41:37.542097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.895 qpair failed and we were unable to recover it. 00:36:17.895 [2024-07-26 16:41:37.542271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.895 [2024-07-26 16:41:37.542307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.895 qpair failed and we were unable to recover it. 00:36:17.895 [2024-07-26 16:41:37.542494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.895 [2024-07-26 16:41:37.542530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.895 qpair failed and we were unable to recover it. 00:36:17.895 [2024-07-26 16:41:37.542717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.895 [2024-07-26 16:41:37.542749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.895 qpair failed and we were unable to recover it. 00:36:17.895 [2024-07-26 16:41:37.542936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.895 [2024-07-26 16:41:37.542972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.895 qpair failed and we were unable to recover it. 00:36:17.895 [2024-07-26 16:41:37.543195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.895 [2024-07-26 16:41:37.543232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.895 qpair failed and we were unable to recover it. 00:36:17.895 [2024-07-26 16:41:37.543462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.895 [2024-07-26 16:41:37.543494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.895 qpair failed and we were unable to recover it. 
00:36:17.895 [2024-07-26 16:41:37.543696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.895 [2024-07-26 16:41:37.543732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.895 qpair failed and we were unable to recover it. 00:36:17.895 [2024-07-26 16:41:37.543932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.895 [2024-07-26 16:41:37.543969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.895 qpair failed and we were unable to recover it. 00:36:17.896 [2024-07-26 16:41:37.544172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.896 [2024-07-26 16:41:37.544209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.896 qpair failed and we were unable to recover it. 00:36:17.896 [2024-07-26 16:41:37.544410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.896 [2024-07-26 16:41:37.544446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.896 qpair failed and we were unable to recover it. 00:36:17.896 [2024-07-26 16:41:37.544675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.896 [2024-07-26 16:41:37.544711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.896 qpair failed and we were unable to recover it. 00:36:17.896 [2024-07-26 16:41:37.544941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.896 [2024-07-26 16:41:37.544973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.896 qpair failed and we were unable to recover it. 00:36:17.896 [2024-07-26 16:41:37.545160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.896 [2024-07-26 16:41:37.545192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.896 qpair failed and we were unable to recover it. 00:36:17.896 [2024-07-26 16:41:37.545339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.896 [2024-07-26 16:41:37.545371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.896 qpair failed and we were unable to recover it. 00:36:17.896 [2024-07-26 16:41:37.545505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.896 [2024-07-26 16:41:37.545548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.896 qpair failed and we were unable to recover it. 00:36:17.896 [2024-07-26 16:41:37.545740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.896 [2024-07-26 16:41:37.545777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.896 qpair failed and we were unable to recover it. 
00:36:17.896 [2024-07-26 16:41:37.545969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.896 [2024-07-26 16:41:37.546004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.896 qpair failed and we were unable to recover it. 00:36:17.896 [2024-07-26 16:41:37.546176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.896 [2024-07-26 16:41:37.546209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.896 qpair failed and we were unable to recover it. 00:36:17.896 [2024-07-26 16:41:37.546427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.896 [2024-07-26 16:41:37.546463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.896 qpair failed and we were unable to recover it. 00:36:17.896 [2024-07-26 16:41:37.546658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.896 [2024-07-26 16:41:37.546693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.896 qpair failed and we were unable to recover it. 00:36:17.896 [2024-07-26 16:41:37.546918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.896 [2024-07-26 16:41:37.546950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.896 qpair failed and we were unable to recover it. 00:36:17.896 [2024-07-26 16:41:37.547194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.896 [2024-07-26 16:41:37.547227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.896 qpair failed and we were unable to recover it. 00:36:17.896 [2024-07-26 16:41:37.547441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.896 [2024-07-26 16:41:37.547477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.896 qpair failed and we were unable to recover it. 00:36:17.896 [2024-07-26 16:41:37.547696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.896 [2024-07-26 16:41:37.547728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.896 qpair failed and we were unable to recover it. 00:36:17.896 [2024-07-26 16:41:37.547931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.896 [2024-07-26 16:41:37.547966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.896 qpair failed and we were unable to recover it. 00:36:17.896 [2024-07-26 16:41:37.548162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.896 [2024-07-26 16:41:37.548199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.896 qpair failed and we were unable to recover it. 
00:36:17.896 [2024-07-26 16:41:37.548370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.896 [2024-07-26 16:41:37.548403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.896 qpair failed and we were unable to recover it. 00:36:17.896 [2024-07-26 16:41:37.548554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.896 [2024-07-26 16:41:37.548586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.896 qpair failed and we were unable to recover it. 00:36:17.896 [2024-07-26 16:41:37.548778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.896 [2024-07-26 16:41:37.548814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.896 qpair failed and we were unable to recover it. 00:36:17.896 [2024-07-26 16:41:37.548993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.896 [2024-07-26 16:41:37.549025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.896 qpair failed and we were unable to recover it. 00:36:17.896 [2024-07-26 16:41:37.549203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.896 [2024-07-26 16:41:37.549236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.896 qpair failed and we were unable to recover it. 00:36:17.896 [2024-07-26 16:41:37.549429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.896 [2024-07-26 16:41:37.549465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.896 qpair failed and we were unable to recover it. 00:36:17.896 [2024-07-26 16:41:37.549698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.896 [2024-07-26 16:41:37.549732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.896 qpair failed and we were unable to recover it. 00:36:17.896 [2024-07-26 16:41:37.549944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.896 [2024-07-26 16:41:37.549976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.896 qpair failed and we were unable to recover it. 00:36:17.896 [2024-07-26 16:41:37.550177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.896 [2024-07-26 16:41:37.550213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.896 qpair failed and we were unable to recover it. 00:36:17.896 [2024-07-26 16:41:37.550442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.896 [2024-07-26 16:41:37.550475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.896 qpair failed and we were unable to recover it. 
00:36:17.896 [2024-07-26 16:41:37.550671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.896 [2024-07-26 16:41:37.550707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.896 qpair failed and we were unable to recover it. 00:36:17.896 [2024-07-26 16:41:37.550928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.896 [2024-07-26 16:41:37.550964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.896 qpair failed and we were unable to recover it. 00:36:17.896 [2024-07-26 16:41:37.551153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.896 [2024-07-26 16:41:37.551186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.896 qpair failed and we were unable to recover it. 00:36:17.896 [2024-07-26 16:41:37.551385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.896 [2024-07-26 16:41:37.551422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.896 qpair failed and we were unable to recover it. 00:36:17.896 [2024-07-26 16:41:37.551636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.896 [2024-07-26 16:41:37.551668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.896 qpair failed and we were unable to recover it. 00:36:17.896 [2024-07-26 16:41:37.551842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.896 [2024-07-26 16:41:37.551874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.896 qpair failed and we were unable to recover it. 00:36:17.896 [2024-07-26 16:41:37.552071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.896 [2024-07-26 16:41:37.552107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.896 qpair failed and we were unable to recover it. 00:36:17.896 [2024-07-26 16:41:37.552331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.896 [2024-07-26 16:41:37.552367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.896 qpair failed and we were unable to recover it. 00:36:17.896 [2024-07-26 16:41:37.552568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.897 [2024-07-26 16:41:37.552601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.897 qpair failed and we were unable to recover it. 00:36:17.897 [2024-07-26 16:41:37.552827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.897 [2024-07-26 16:41:37.552859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.897 qpair failed and we were unable to recover it. 
00:36:17.897 [2024-07-26 16:41:37.553008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.897 [2024-07-26 16:41:37.553040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.897 qpair failed and we were unable to recover it. 00:36:17.897 [2024-07-26 16:41:37.553224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.897 [2024-07-26 16:41:37.553256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.897 qpair failed and we were unable to recover it. 00:36:17.897 [2024-07-26 16:41:37.553461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.897 [2024-07-26 16:41:37.553497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.897 qpair failed and we were unable to recover it. 00:36:17.897 [2024-07-26 16:41:37.553723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.897 [2024-07-26 16:41:37.553759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.897 qpair failed and we were unable to recover it. 00:36:17.897 [2024-07-26 16:41:37.553979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.897 [2024-07-26 16:41:37.554011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.897 qpair failed and we were unable to recover it. 00:36:17.897 [2024-07-26 16:41:37.554191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.897 [2024-07-26 16:41:37.554224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.897 qpair failed and we were unable to recover it. 00:36:17.897 [2024-07-26 16:41:37.554407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.897 [2024-07-26 16:41:37.554440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.897 qpair failed and we were unable to recover it. 00:36:17.897 [2024-07-26 16:41:37.554619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.897 [2024-07-26 16:41:37.554652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.897 qpair failed and we were unable to recover it. 00:36:17.897 [2024-07-26 16:41:37.554842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.897 [2024-07-26 16:41:37.554878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.897 qpair failed and we were unable to recover it. 00:36:17.897 [2024-07-26 16:41:37.555081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.897 [2024-07-26 16:41:37.555133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.897 qpair failed and we were unable to recover it. 
00:36:17.897 [2024-07-26 16:41:37.555274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.897 [2024-07-26 16:41:37.555306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.897 qpair failed and we were unable to recover it. 00:36:17.897 [2024-07-26 16:41:37.555501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.897 [2024-07-26 16:41:37.555537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.897 qpair failed and we were unable to recover it. 00:36:17.897 [2024-07-26 16:41:37.555728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.897 [2024-07-26 16:41:37.555764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.897 qpair failed and we were unable to recover it. 00:36:17.897 [2024-07-26 16:41:37.555986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.897 [2024-07-26 16:41:37.556018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.897 qpair failed and we were unable to recover it. 00:36:17.897 [2024-07-26 16:41:37.556171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.897 [2024-07-26 16:41:37.556204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.897 qpair failed and we were unable to recover it. 00:36:17.897 [2024-07-26 16:41:37.556427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.897 [2024-07-26 16:41:37.556463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.897 qpair failed and we were unable to recover it. 00:36:17.897 [2024-07-26 16:41:37.556661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.897 [2024-07-26 16:41:37.556693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.897 qpair failed and we were unable to recover it. 00:36:17.897 [2024-07-26 16:41:37.556871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.897 [2024-07-26 16:41:37.556904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.897 qpair failed and we were unable to recover it. 00:36:17.897 [2024-07-26 16:41:37.557127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.897 [2024-07-26 16:41:37.557163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.897 qpair failed and we were unable to recover it. 00:36:17.897 [2024-07-26 16:41:37.557397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.897 [2024-07-26 16:41:37.557429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.897 qpair failed and we were unable to recover it. 
00:36:17.897 [2024-07-26 16:41:37.557602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.897 [2024-07-26 16:41:37.557638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.897 qpair failed and we were unable to recover it. 00:36:17.897 [2024-07-26 16:41:37.557859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.897 [2024-07-26 16:41:37.557895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.897 qpair failed and we were unable to recover it. 00:36:17.897 [2024-07-26 16:41:37.558104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.897 [2024-07-26 16:41:37.558138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.897 qpair failed and we were unable to recover it. 00:36:17.897 [2024-07-26 16:41:37.558315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.897 [2024-07-26 16:41:37.558347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.897 qpair failed and we were unable to recover it. 00:36:17.897 [2024-07-26 16:41:37.558506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.897 [2024-07-26 16:41:37.558539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.897 qpair failed and we were unable to recover it. 00:36:17.897 [2024-07-26 16:41:37.558739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.897 [2024-07-26 16:41:37.558771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.897 qpair failed and we were unable to recover it. 00:36:17.897 [2024-07-26 16:41:37.558994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.897 [2024-07-26 16:41:37.559030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.897 qpair failed and we were unable to recover it. 00:36:17.897 [2024-07-26 16:41:37.559266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.897 [2024-07-26 16:41:37.559299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.897 qpair failed and we were unable to recover it. 00:36:17.897 [2024-07-26 16:41:37.559475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.897 [2024-07-26 16:41:37.559507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.897 qpair failed and we were unable to recover it. 00:36:17.897 [2024-07-26 16:41:37.559684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.897 [2024-07-26 16:41:37.559720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.897 qpair failed and we were unable to recover it. 
00:36:17.897 [2024-07-26 16:41:37.559927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.897 [2024-07-26 16:41:37.559963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:17.897 qpair failed and we were unable to recover it.
00:36:17.904 [... the same three-line sequence -- posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." -- repeats continuously from 16:41:37.560 through 16:41:37.607 ...]
00:36:17.904 [2024-07-26 16:41:37.606913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.904 [2024-07-26 16:41:37.606948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.904 qpair failed and we were unable to recover it. 00:36:17.904 [2024-07-26 16:41:37.607112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.904 [2024-07-26 16:41:37.607145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.904 qpair failed and we were unable to recover it. 00:36:17.904 [2024-07-26 16:41:37.607315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.904 [2024-07-26 16:41:37.607352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.904 qpair failed and we were unable to recover it. 00:36:17.904 [2024-07-26 16:41:37.607552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.904 [2024-07-26 16:41:37.607587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.904 qpair failed and we were unable to recover it. 00:36:17.904 [2024-07-26 16:41:37.607783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.904 [2024-07-26 16:41:37.607815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.904 qpair failed and we were unable to recover it. 00:36:17.904 [2024-07-26 16:41:37.608002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.904 [2024-07-26 16:41:37.608038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.904 qpair failed and we were unable to recover it. 00:36:17.904 [2024-07-26 16:41:37.608245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.904 [2024-07-26 16:41:37.608277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.904 qpair failed and we were unable to recover it. 00:36:17.904 [2024-07-26 16:41:37.608456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.904 [2024-07-26 16:41:37.608488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.904 qpair failed and we were unable to recover it. 00:36:17.904 [2024-07-26 16:41:37.608654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.904 [2024-07-26 16:41:37.608690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.904 qpair failed and we were unable to recover it. 00:36:17.904 [2024-07-26 16:41:37.608885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.904 [2024-07-26 16:41:37.608922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.904 qpair failed and we were unable to recover it. 
00:36:17.904 [2024-07-26 16:41:37.609122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.904 [2024-07-26 16:41:37.609155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.904 qpair failed and we were unable to recover it. 00:36:17.904 [2024-07-26 16:41:37.609356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.904 [2024-07-26 16:41:37.609391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.904 qpair failed and we were unable to recover it. 00:36:17.904 [2024-07-26 16:41:37.609625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.904 [2024-07-26 16:41:37.609661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.904 qpair failed and we were unable to recover it. 00:36:17.904 [2024-07-26 16:41:37.609832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.904 [2024-07-26 16:41:37.609864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.904 qpair failed and we were unable to recover it. 00:36:17.904 [2024-07-26 16:41:37.610040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.904 [2024-07-26 16:41:37.610082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.904 qpair failed and we were unable to recover it. 00:36:17.904 [2024-07-26 16:41:37.610259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.904 [2024-07-26 16:41:37.610291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.904 qpair failed and we were unable to recover it. 00:36:17.904 [2024-07-26 16:41:37.610495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.904 [2024-07-26 16:41:37.610528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.904 qpair failed and we were unable to recover it. 00:36:17.904 [2024-07-26 16:41:37.610717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.904 [2024-07-26 16:41:37.610754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.904 qpair failed and we were unable to recover it. 00:36:17.904 [2024-07-26 16:41:37.610947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.904 [2024-07-26 16:41:37.610983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.904 qpair failed and we were unable to recover it. 00:36:17.904 [2024-07-26 16:41:37.611182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.904 [2024-07-26 16:41:37.611214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.904 qpair failed and we were unable to recover it. 
00:36:17.904 [2024-07-26 16:41:37.611354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.904 [2024-07-26 16:41:37.611386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.904 qpair failed and we were unable to recover it. 00:36:17.904 [2024-07-26 16:41:37.611540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.904 [2024-07-26 16:41:37.611590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.904 qpair failed and we were unable to recover it. 00:36:17.904 [2024-07-26 16:41:37.611786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.904 [2024-07-26 16:41:37.611819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.904 qpair failed and we were unable to recover it. 00:36:17.904 [2024-07-26 16:41:37.612036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.904 [2024-07-26 16:41:37.612078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.904 qpair failed and we were unable to recover it. 00:36:17.904 [2024-07-26 16:41:37.612276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.904 [2024-07-26 16:41:37.612309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.904 qpair failed and we were unable to recover it. 00:36:17.904 [2024-07-26 16:41:37.612487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.904 [2024-07-26 16:41:37.612520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.904 qpair failed and we were unable to recover it. 00:36:17.904 [2024-07-26 16:41:37.612669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.904 [2024-07-26 16:41:37.612700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.904 qpair failed and we were unable to recover it. 00:36:17.904 [2024-07-26 16:41:37.612873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.904 [2024-07-26 16:41:37.612905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.904 qpair failed and we were unable to recover it. 00:36:17.904 [2024-07-26 16:41:37.613076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.904 [2024-07-26 16:41:37.613109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.904 qpair failed and we were unable to recover it. 00:36:17.904 [2024-07-26 16:41:37.613283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.904 [2024-07-26 16:41:37.613319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.904 qpair failed and we were unable to recover it. 
00:36:17.904 [2024-07-26 16:41:37.613516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.904 [2024-07-26 16:41:37.613552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.904 qpair failed and we were unable to recover it. 00:36:17.904 [2024-07-26 16:41:37.613750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.904 [2024-07-26 16:41:37.613782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.904 qpair failed and we were unable to recover it. 00:36:17.904 [2024-07-26 16:41:37.614003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.904 [2024-07-26 16:41:37.614039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.904 qpair failed and we were unable to recover it. 00:36:17.904 [2024-07-26 16:41:37.614280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.904 [2024-07-26 16:41:37.614312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.904 qpair failed and we were unable to recover it. 00:36:17.904 [2024-07-26 16:41:37.614489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.905 [2024-07-26 16:41:37.614521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.905 qpair failed and we were unable to recover it. 00:36:17.905 [2024-07-26 16:41:37.614686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.905 [2024-07-26 16:41:37.614722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.905 qpair failed and we were unable to recover it. 00:36:17.905 [2024-07-26 16:41:37.614925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.905 [2024-07-26 16:41:37.614957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.905 qpair failed and we were unable to recover it. 00:36:17.905 [2024-07-26 16:41:37.615132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.905 [2024-07-26 16:41:37.615165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.905 qpair failed and we were unable to recover it. 00:36:17.905 [2024-07-26 16:41:37.615399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.905 [2024-07-26 16:41:37.615434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.905 qpair failed and we were unable to recover it. 00:36:17.905 [2024-07-26 16:41:37.615655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.905 [2024-07-26 16:41:37.615692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.905 qpair failed and we were unable to recover it. 
00:36:17.905 [2024-07-26 16:41:37.615888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.905 [2024-07-26 16:41:37.615920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.905 qpair failed and we were unable to recover it. 00:36:17.905 [2024-07-26 16:41:37.616097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.905 [2024-07-26 16:41:37.616133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.905 qpair failed and we were unable to recover it. 00:36:17.905 [2024-07-26 16:41:37.616305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.905 [2024-07-26 16:41:37.616340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.905 qpair failed and we were unable to recover it. 00:36:17.905 [2024-07-26 16:41:37.616540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.905 [2024-07-26 16:41:37.616572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.905 qpair failed and we were unable to recover it. 00:36:17.905 [2024-07-26 16:41:37.616754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.905 [2024-07-26 16:41:37.616786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.905 qpair failed and we were unable to recover it. 00:36:17.905 [2024-07-26 16:41:37.616966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.905 [2024-07-26 16:41:37.616998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.905 qpair failed and we were unable to recover it. 00:36:17.905 [2024-07-26 16:41:37.617172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.905 [2024-07-26 16:41:37.617205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.905 qpair failed and we were unable to recover it. 00:36:17.905 [2024-07-26 16:41:37.617401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.905 [2024-07-26 16:41:37.617437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.905 qpair failed and we were unable to recover it. 00:36:17.905 [2024-07-26 16:41:37.617633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.905 [2024-07-26 16:41:37.617680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.905 qpair failed and we were unable to recover it. 00:36:17.905 [2024-07-26 16:41:37.617858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.905 [2024-07-26 16:41:37.617891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.905 qpair failed and we were unable to recover it. 
00:36:17.905 [2024-07-26 16:41:37.618035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.905 [2024-07-26 16:41:37.618101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.905 qpair failed and we were unable to recover it. 00:36:17.905 [2024-07-26 16:41:37.618298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.905 [2024-07-26 16:41:37.618330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.905 qpair failed and we were unable to recover it. 00:36:17.905 [2024-07-26 16:41:37.618530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.905 [2024-07-26 16:41:37.618562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.905 qpair failed and we were unable to recover it. 00:36:17.905 [2024-07-26 16:41:37.618736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.905 [2024-07-26 16:41:37.618774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.905 qpair failed and we were unable to recover it. 00:36:17.905 [2024-07-26 16:41:37.618944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.905 [2024-07-26 16:41:37.618979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.905 qpair failed and we were unable to recover it. 00:36:17.905 [2024-07-26 16:41:37.619157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.905 [2024-07-26 16:41:37.619190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.905 qpair failed and we were unable to recover it. 00:36:17.905 [2024-07-26 16:41:37.619350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.905 [2024-07-26 16:41:37.619382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.905 qpair failed and we were unable to recover it. 00:36:17.905 [2024-07-26 16:41:37.619619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.905 [2024-07-26 16:41:37.619655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.905 qpair failed and we were unable to recover it. 00:36:17.905 [2024-07-26 16:41:37.619881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.905 [2024-07-26 16:41:37.619913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.905 qpair failed and we were unable to recover it. 00:36:17.905 [2024-07-26 16:41:37.620120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.905 [2024-07-26 16:41:37.620153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.905 qpair failed and we were unable to recover it. 
00:36:17.905 [2024-07-26 16:41:37.620330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.905 [2024-07-26 16:41:37.620362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.905 qpair failed and we were unable to recover it. 00:36:17.905 [2024-07-26 16:41:37.620539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.906 [2024-07-26 16:41:37.620572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.906 qpair failed and we were unable to recover it. 00:36:17.906 [2024-07-26 16:41:37.620727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.906 [2024-07-26 16:41:37.620759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.906 qpair failed and we were unable to recover it. 00:36:17.906 [2024-07-26 16:41:37.620956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.906 [2024-07-26 16:41:37.620993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.906 qpair failed and we were unable to recover it. 00:36:17.906 [2024-07-26 16:41:37.621223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.906 [2024-07-26 16:41:37.621256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.906 qpair failed and we were unable to recover it. 00:36:17.906 [2024-07-26 16:41:37.621449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.906 [2024-07-26 16:41:37.621485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.906 qpair failed and we were unable to recover it. 00:36:17.906 [2024-07-26 16:41:37.621712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.906 [2024-07-26 16:41:37.621749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.906 qpair failed and we were unable to recover it. 00:36:17.906 [2024-07-26 16:41:37.621919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.906 [2024-07-26 16:41:37.621951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.906 qpair failed and we were unable to recover it. 00:36:17.906 [2024-07-26 16:41:37.622167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.906 [2024-07-26 16:41:37.622204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.906 qpair failed and we were unable to recover it. 00:36:17.906 [2024-07-26 16:41:37.622403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.906 [2024-07-26 16:41:37.622443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:17.906 qpair failed and we were unable to recover it. 
00:36:18.175 [2024-07-26 16:41:37.622635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.622667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-07-26 16:41:37.622839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.622871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-07-26 16:41:37.623074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.623116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-07-26 16:41:37.623262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.623294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-07-26 16:41:37.623468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.623504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-07-26 16:41:37.623669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.623706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-07-26 16:41:37.623909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.623941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-07-26 16:41:37.624141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.624177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-07-26 16:41:37.624375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.624415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-07-26 16:41:37.624601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.624633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 
00:36:18.175 [2024-07-26 16:41:37.624784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.624834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-07-26 16:41:37.625068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.625101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-07-26 16:41:37.625282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.625315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-07-26 16:41:37.625525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.625561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-07-26 16:41:37.625759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.625795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-07-26 16:41:37.625991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.626024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-07-26 16:41:37.626187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.626220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-07-26 16:41:37.626399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.626436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-07-26 16:41:37.626660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.626692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-07-26 16:41:37.626896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.626932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 
00:36:18.175 [2024-07-26 16:41:37.627140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.627173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-07-26 16:41:37.627326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.627367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-07-26 16:41:37.627526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.627558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-07-26 16:41:37.627788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.627824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-07-26 16:41:37.628003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.628036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-07-26 16:41:37.628225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.628257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-07-26 16:41:37.628467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.628503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-07-26 16:41:37.628698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.628730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-07-26 16:41:37.628898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.628934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-07-26 16:41:37.629143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.629176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 
00:36:18.175 [2024-07-26 16:41:37.629328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.629360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-07-26 16:41:37.629504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.629536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-07-26 16:41:37.629684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.629735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-07-26 16:41:37.629933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.629965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-07-26 16:41:37.630141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.630178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-07-26 16:41:37.630356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.630392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-07-26 16:41:37.630564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.630596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-07-26 16:41:37.630791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.630827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-07-26 16:41:37.630993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.631029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-07-26 16:41:37.631216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.631253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 
00:36:18.175 [2024-07-26 16:41:37.631449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.631485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-07-26 16:41:37.631687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.631724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-07-26 16:41:37.631892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.175 [2024-07-26 16:41:37.631935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.175 qpair failed and we were unable to recover it. 00:36:18.175 [2024-07-26 16:41:37.632078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.176 [2024-07-26 16:41:37.632111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.176 qpair failed and we were unable to recover it. 00:36:18.176 [2024-07-26 16:41:37.632291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.176 [2024-07-26 16:41:37.632323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.176 qpair failed and we were unable to recover it. 00:36:18.176 [2024-07-26 16:41:37.632522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.176 [2024-07-26 16:41:37.632554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.176 qpair failed and we were unable to recover it. 00:36:18.176 [2024-07-26 16:41:37.632753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.176 [2024-07-26 16:41:37.632788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.176 qpair failed and we were unable to recover it. 00:36:18.176 [2024-07-26 16:41:37.632959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.176 [2024-07-26 16:41:37.632994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.176 qpair failed and we were unable to recover it. 00:36:18.176 [2024-07-26 16:41:37.633187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.176 [2024-07-26 16:41:37.633220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.176 qpair failed and we were unable to recover it. 00:36:18.176 [2024-07-26 16:41:37.633370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.176 [2024-07-26 16:41:37.633403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.176 qpair failed and we were unable to recover it. 
00:36:18.176 [2024-07-26 16:41:37.633559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.176 [2024-07-26 16:41:37.633591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.176 qpair failed and we were unable to recover it. 00:36:18.176 [2024-07-26 16:41:37.633734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.176 [2024-07-26 16:41:37.633766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.176 qpair failed and we were unable to recover it. 00:36:18.176 [2024-07-26 16:41:37.633942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.176 [2024-07-26 16:41:37.633977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.176 qpair failed and we were unable to recover it. 00:36:18.176 [2024-07-26 16:41:37.634163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.176 [2024-07-26 16:41:37.634196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.176 qpair failed and we were unable to recover it. 00:36:18.176 [2024-07-26 16:41:37.634374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.176 [2024-07-26 16:41:37.634406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.176 qpair failed and we were unable to recover it. 00:36:18.176 [2024-07-26 16:41:37.634562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.176 [2024-07-26 16:41:37.634597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.176 qpair failed and we were unable to recover it. 00:36:18.176 [2024-07-26 16:41:37.634789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.176 [2024-07-26 16:41:37.634825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.176 qpair failed and we were unable to recover it. 00:36:18.176 [2024-07-26 16:41:37.635010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.176 [2024-07-26 16:41:37.635042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.176 qpair failed and we were unable to recover it. 00:36:18.176 [2024-07-26 16:41:37.635228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.176 [2024-07-26 16:41:37.635261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.176 qpair failed and we were unable to recover it. 00:36:18.176 [2024-07-26 16:41:37.635459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.176 [2024-07-26 16:41:37.635495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.176 qpair failed and we were unable to recover it. 
00:36:18.176 [2024-07-26 16:41:37.635669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.176 [2024-07-26 16:41:37.635701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.176 qpair failed and we were unable to recover it. 00:36:18.176 [2024-07-26 16:41:37.635867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.176 [2024-07-26 16:41:37.635904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.176 qpair failed and we were unable to recover it. 00:36:18.176 [2024-07-26 16:41:37.636076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.176 [2024-07-26 16:41:37.636112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.176 qpair failed and we were unable to recover it. 00:36:18.176 [2024-07-26 16:41:37.636323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.176 [2024-07-26 16:41:37.636356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.176 qpair failed and we were unable to recover it. 00:36:18.176 [2024-07-26 16:41:37.636510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.176 [2024-07-26 16:41:37.636546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.176 qpair failed and we were unable to recover it. 00:36:18.176 [2024-07-26 16:41:37.636748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.176 [2024-07-26 16:41:37.636780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.176 qpair failed and we were unable to recover it. 00:36:18.176 [2024-07-26 16:41:37.636957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.176 [2024-07-26 16:41:37.636989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.176 qpair failed and we were unable to recover it. 00:36:18.176 [2024-07-26 16:41:37.637139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.176 [2024-07-26 16:41:37.637172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.176 qpair failed and we were unable to recover it. 00:36:18.176 [2024-07-26 16:41:37.637346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.176 [2024-07-26 16:41:37.637381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.176 qpair failed and we were unable to recover it. 00:36:18.176 [2024-07-26 16:41:37.637571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.176 [2024-07-26 16:41:37.637603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.176 qpair failed and we were unable to recover it. 
00:36:18.176 [2024-07-26 16:41:37.637788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.176 [2024-07-26 16:41:37.637823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.176 qpair failed and we were unable to recover it. 00:36:18.176 [2024-07-26 16:41:37.637979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.176 [2024-07-26 16:41:37.638015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.176 qpair failed and we were unable to recover it. 00:36:18.176 [2024-07-26 16:41:37.638218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.176 [2024-07-26 16:41:37.638250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.176 qpair failed and we were unable to recover it. 00:36:18.176 [2024-07-26 16:41:37.638404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.176 [2024-07-26 16:41:37.638439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.176 qpair failed and we were unable to recover it. 00:36:18.176 [2024-07-26 16:41:37.638612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.176 [2024-07-26 16:41:37.638645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.176 qpair failed and we were unable to recover it. 00:36:18.176 [2024-07-26 16:41:37.638820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.176 [2024-07-26 16:41:37.638853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.176 qpair failed and we were unable to recover it. 00:36:18.176 [2024-07-26 16:41:37.639050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.176 [2024-07-26 16:41:37.639087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.176 qpair failed and we were unable to recover it. 00:36:18.176 [2024-07-26 16:41:37.639235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.176 [2024-07-26 16:41:37.639267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.176 qpair failed and we were unable to recover it. 00:36:18.176 [2024-07-26 16:41:37.639448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.176 [2024-07-26 16:41:37.639480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.176 qpair failed and we were unable to recover it. 00:36:18.176 [2024-07-26 16:41:37.639675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.176 [2024-07-26 16:41:37.639714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.176 qpair failed and we were unable to recover it. 
00:36:18.176 [2024-07-26 16:41:37.639872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.176 [2024-07-26 16:41:37.639907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.176 qpair failed and we were unable to recover it. 00:36:18.177 [2024-07-26 16:41:37.640117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.177 [2024-07-26 16:41:37.640150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.177 qpair failed and we were unable to recover it. 00:36:18.177 [2024-07-26 16:41:37.640355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.177 [2024-07-26 16:41:37.640391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.177 qpair failed and we were unable to recover it. 00:36:18.177 [2024-07-26 16:41:37.640579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.177 [2024-07-26 16:41:37.640615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.177 qpair failed and we were unable to recover it. 00:36:18.177 [2024-07-26 16:41:37.640806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.177 [2024-07-26 16:41:37.640838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.177 qpair failed and we were unable to recover it. 00:36:18.177 [2024-07-26 16:41:37.640982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.177 [2024-07-26 16:41:37.641014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.177 qpair failed and we were unable to recover it. 00:36:18.177 [2024-07-26 16:41:37.641235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.177 [2024-07-26 16:41:37.641268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.177 qpair failed and we were unable to recover it. 00:36:18.177 [2024-07-26 16:41:37.641426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.177 [2024-07-26 16:41:37.641457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.177 qpair failed and we were unable to recover it. 00:36:18.177 [2024-07-26 16:41:37.641653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.177 [2024-07-26 16:41:37.641685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.177 qpair failed and we were unable to recover it. 00:36:18.177 [2024-07-26 16:41:37.641877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.177 [2024-07-26 16:41:37.641913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.177 qpair failed and we were unable to recover it. 
00:36:18.177 [2024-07-26 16:41:37.642124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.177 [2024-07-26 16:41:37.642156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.177 qpair failed and we were unable to recover it. 00:36:18.177 [2024-07-26 16:41:37.642314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.177 [2024-07-26 16:41:37.642350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.177 qpair failed and we were unable to recover it. 00:36:18.177 [2024-07-26 16:41:37.642551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.177 [2024-07-26 16:41:37.642583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.177 qpair failed and we were unable to recover it. 00:36:18.177 [2024-07-26 16:41:37.642784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.177 [2024-07-26 16:41:37.642816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.177 qpair failed and we were unable to recover it. 00:36:18.177 [2024-07-26 16:41:37.643050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.177 [2024-07-26 16:41:37.643099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.177 qpair failed and we were unable to recover it. 00:36:18.177 [2024-07-26 16:41:37.643321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.177 [2024-07-26 16:41:37.643357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.177 qpair failed and we were unable to recover it. 00:36:18.177 [2024-07-26 16:41:37.643553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.177 [2024-07-26 16:41:37.643585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.177 qpair failed and we were unable to recover it. 00:36:18.177 [2024-07-26 16:41:37.643765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.177 [2024-07-26 16:41:37.643801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.177 qpair failed and we were unable to recover it. 00:36:18.177 [2024-07-26 16:41:37.643995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.177 [2024-07-26 16:41:37.644031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.177 qpair failed and we were unable to recover it. 00:36:18.177 [2024-07-26 16:41:37.644257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.177 [2024-07-26 16:41:37.644289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.177 qpair failed and we were unable to recover it. 
00:36:18.177 [2024-07-26 16:41:37.644491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.177 [2024-07-26 16:41:37.644527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.177 qpair failed and we were unable to recover it. 00:36:18.177 [2024-07-26 16:41:37.644726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.177 [2024-07-26 16:41:37.644762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.177 qpair failed and we were unable to recover it. 00:36:18.177 [2024-07-26 16:41:37.645011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.177 [2024-07-26 16:41:37.645046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.177 qpair failed and we were unable to recover it. 00:36:18.177 [2024-07-26 16:41:37.645257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.177 [2024-07-26 16:41:37.645290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.177 qpair failed and we were unable to recover it. 00:36:18.177 [2024-07-26 16:41:37.645475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.177 [2024-07-26 16:41:37.645508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.177 qpair failed and we were unable to recover it. 00:36:18.177 [2024-07-26 16:41:37.645658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.177 [2024-07-26 16:41:37.645689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.177 qpair failed and we were unable to recover it. 00:36:18.177 [2024-07-26 16:41:37.645911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.177 [2024-07-26 16:41:37.645958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.177 qpair failed and we were unable to recover it. 00:36:18.177 [2024-07-26 16:41:37.646184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.177 [2024-07-26 16:41:37.646217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.177 qpair failed and we were unable to recover it. 00:36:18.177 [2024-07-26 16:41:37.646375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.177 [2024-07-26 16:41:37.646408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.177 qpair failed and we were unable to recover it. 00:36:18.177 [2024-07-26 16:41:37.646584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.177 [2024-07-26 16:41:37.646616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.177 qpair failed and we were unable to recover it. 
00:36:18.177 [2024-07-26 16:41:37.646782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.177 [2024-07-26 16:41:37.646814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.177 qpair failed and we were unable to recover it. 00:36:18.177 [2024-07-26 16:41:37.646997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.177 [2024-07-26 16:41:37.647029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.177 qpair failed and we were unable to recover it. 00:36:18.177 [2024-07-26 16:41:37.647216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.177 [2024-07-26 16:41:37.647249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.177 qpair failed and we were unable to recover it. 00:36:18.177 [2024-07-26 16:41:37.647429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.177 [2024-07-26 16:41:37.647461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.177 qpair failed and we were unable to recover it. 00:36:18.177 [2024-07-26 16:41:37.647635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.177 [2024-07-26 16:41:37.647667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.177 qpair failed and we were unable to recover it. 00:36:18.177 [2024-07-26 16:41:37.647843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.177 [2024-07-26 16:41:37.647876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.177 qpair failed and we were unable to recover it. 00:36:18.177 [2024-07-26 16:41:37.648021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.177 [2024-07-26 16:41:37.648054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.177 qpair failed and we were unable to recover it. 00:36:18.177 [2024-07-26 16:41:37.648205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.177 [2024-07-26 16:41:37.648237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.177 qpair failed and we were unable to recover it. 00:36:18.177 [2024-07-26 16:41:37.648389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.178 [2024-07-26 16:41:37.648421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.178 qpair failed and we were unable to recover it. 00:36:18.178 [2024-07-26 16:41:37.648591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.178 [2024-07-26 16:41:37.648632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.178 qpair failed and we were unable to recover it. 
00:36:18.178 [2024-07-26 16:41:37.648815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.178 [2024-07-26 16:41:37.648846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.178 qpair failed and we were unable to recover it. 00:36:18.178 [2024-07-26 16:41:37.649016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.178 [2024-07-26 16:41:37.649053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.178 qpair failed and we were unable to recover it. 00:36:18.178 [2024-07-26 16:41:37.649213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.178 [2024-07-26 16:41:37.649245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.178 qpair failed and we were unable to recover it. 00:36:18.178 [2024-07-26 16:41:37.649408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.178 [2024-07-26 16:41:37.649441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.178 qpair failed and we were unable to recover it. 00:36:18.178 [2024-07-26 16:41:37.649620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.178 [2024-07-26 16:41:37.649653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.178 qpair failed and we were unable to recover it. 00:36:18.178 [2024-07-26 16:41:37.649800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.178 [2024-07-26 16:41:37.649832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.178 qpair failed and we were unable to recover it. 00:36:18.178 [2024-07-26 16:41:37.650011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.178 [2024-07-26 16:41:37.650052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.178 qpair failed and we were unable to recover it. 00:36:18.178 [2024-07-26 16:41:37.650249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.178 [2024-07-26 16:41:37.650282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.178 qpair failed and we were unable to recover it. 00:36:18.178 [2024-07-26 16:41:37.650445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.178 [2024-07-26 16:41:37.650477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.178 qpair failed and we were unable to recover it. 00:36:18.178 [2024-07-26 16:41:37.650647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.178 [2024-07-26 16:41:37.650680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.178 qpair failed and we were unable to recover it. 
00:36:18.178 [2024-07-26 16:41:37.650856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.178 [2024-07-26 16:41:37.650905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.178 qpair failed and we were unable to recover it. 00:36:18.178 [2024-07-26 16:41:37.651120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.178 [2024-07-26 16:41:37.651163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.178 qpair failed and we were unable to recover it. 00:36:18.178 [2024-07-26 16:41:37.651355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.178 [2024-07-26 16:41:37.651387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.178 qpair failed and we were unable to recover it. 00:36:18.178 [2024-07-26 16:41:37.651542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.178 [2024-07-26 16:41:37.651574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.178 qpair failed and we were unable to recover it. 00:36:18.178 [2024-07-26 16:41:37.651730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.178 [2024-07-26 16:41:37.651762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.178 qpair failed and we were unable to recover it. 00:36:18.178 [2024-07-26 16:41:37.651900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.178 [2024-07-26 16:41:37.651932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.178 qpair failed and we were unable to recover it. 00:36:18.178 [2024-07-26 16:41:37.652089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.178 [2024-07-26 16:41:37.652121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.178 qpair failed and we were unable to recover it. 00:36:18.178 [2024-07-26 16:41:37.652277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.178 [2024-07-26 16:41:37.652309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.178 qpair failed and we were unable to recover it. 00:36:18.178 [2024-07-26 16:41:37.652489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.178 [2024-07-26 16:41:37.652521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.178 qpair failed and we were unable to recover it. 00:36:18.178 [2024-07-26 16:41:37.652693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.178 [2024-07-26 16:41:37.652725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.178 qpair failed and we were unable to recover it. 
00:36:18.178 [2024-07-26 16:41:37.652902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.178 [2024-07-26 16:41:37.652934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.178 qpair failed and we were unable to recover it. 00:36:18.178 [2024-07-26 16:41:37.653074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.178 [2024-07-26 16:41:37.653107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.178 qpair failed and we were unable to recover it. 00:36:18.178 [2024-07-26 16:41:37.653278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.178 [2024-07-26 16:41:37.653310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.178 qpair failed and we were unable to recover it. 00:36:18.178 [2024-07-26 16:41:37.653524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.178 [2024-07-26 16:41:37.653556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.178 qpair failed and we were unable to recover it. 00:36:18.178 [2024-07-26 16:41:37.653699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.178 [2024-07-26 16:41:37.653731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.178 qpair failed and we were unable to recover it. 00:36:18.178 [2024-07-26 16:41:37.653910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.178 [2024-07-26 16:41:37.653942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.178 qpair failed and we were unable to recover it. 00:36:18.178 [2024-07-26 16:41:37.654127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.178 [2024-07-26 16:41:37.654160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.178 qpair failed and we were unable to recover it. 00:36:18.178 [2024-07-26 16:41:37.654341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.178 [2024-07-26 16:41:37.654373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.178 qpair failed and we were unable to recover it. 00:36:18.178 [2024-07-26 16:41:37.654519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.178 [2024-07-26 16:41:37.654551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.178 qpair failed and we were unable to recover it. 00:36:18.178 [2024-07-26 16:41:37.654712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.178 [2024-07-26 16:41:37.654745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.178 qpair failed and we were unable to recover it. 
00:36:18.178 [2024-07-26 16:41:37.654904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.178 [2024-07-26 16:41:37.654936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.178 qpair failed and we were unable to recover it. 00:36:18.178 [2024-07-26 16:41:37.655121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.178 [2024-07-26 16:41:37.655153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.178 qpair failed and we were unable to recover it. 00:36:18.178 [2024-07-26 16:41:37.655324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.178 [2024-07-26 16:41:37.655362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.178 qpair failed and we were unable to recover it. 00:36:18.178 [2024-07-26 16:41:37.655516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.178 [2024-07-26 16:41:37.655548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.178 qpair failed and we were unable to recover it. 00:36:18.178 [2024-07-26 16:41:37.655730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.178 [2024-07-26 16:41:37.655762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.178 qpair failed and we were unable to recover it. 00:36:18.178 [2024-07-26 16:41:37.655937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.178 [2024-07-26 16:41:37.655970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.178 qpair failed and we were unable to recover it. 00:36:18.179 [2024-07-26 16:41:37.656142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-07-26 16:41:37.656175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-07-26 16:41:37.656320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-07-26 16:41:37.656354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-07-26 16:41:37.656525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-07-26 16:41:37.656558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-07-26 16:41:37.656730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-07-26 16:41:37.656767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 
00:36:18.179 [2024-07-26 16:41:37.656918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-07-26 16:41:37.656949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-07-26 16:41:37.657122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-07-26 16:41:37.657155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-07-26 16:41:37.657322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-07-26 16:41:37.657354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-07-26 16:41:37.657504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-07-26 16:41:37.657536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-07-26 16:41:37.657688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-07-26 16:41:37.657722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-07-26 16:41:37.657921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-07-26 16:41:37.657952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-07-26 16:41:37.658128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-07-26 16:41:37.658161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-07-26 16:41:37.658318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-07-26 16:41:37.658351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-07-26 16:41:37.658533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-07-26 16:41:37.658566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-07-26 16:41:37.658743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-07-26 16:41:37.658775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 
00:36:18.179 [2024-07-26 16:41:37.658926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-07-26 16:41:37.658970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-07-26 16:41:37.659112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-07-26 16:41:37.659145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-07-26 16:41:37.659293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-07-26 16:41:37.659325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-07-26 16:41:37.659499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-07-26 16:41:37.659531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-07-26 16:41:37.659683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-07-26 16:41:37.659715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-07-26 16:41:37.659889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-07-26 16:41:37.659921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-07-26 16:41:37.660098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-07-26 16:41:37.660131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-07-26 16:41:37.660309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-07-26 16:41:37.660342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-07-26 16:41:37.660542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-07-26 16:41:37.660574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-07-26 16:41:37.660728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-07-26 16:41:37.660760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 
00:36:18.179 [2024-07-26 16:41:37.660933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-07-26 16:41:37.660965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-07-26 16:41:37.661146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-07-26 16:41:37.661178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-07-26 16:41:37.661355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-07-26 16:41:37.661387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-07-26 16:41:37.661558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-07-26 16:41:37.661590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-07-26 16:41:37.661764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-07-26 16:41:37.661796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-07-26 16:41:37.661951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-07-26 16:41:37.661984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.179 [2024-07-26 16:41:37.662179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.179 [2024-07-26 16:41:37.662228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.179 qpair failed and we were unable to recover it. 00:36:18.180 [2024-07-26 16:41:37.662454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-07-26 16:41:37.662491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-07-26 16:41:37.662778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-07-26 16:41:37.662831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-07-26 16:41:37.662991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-07-26 16:41:37.663034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 
00:36:18.180 [2024-07-26 16:41:37.663268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-07-26 16:41:37.663318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-07-26 16:41:37.663529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-07-26 16:41:37.663566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-07-26 16:41:37.663756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-07-26 16:41:37.663790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-07-26 16:41:37.664111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-07-26 16:41:37.664147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-07-26 16:41:37.664329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-07-26 16:41:37.664363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-07-26 16:41:37.664516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-07-26 16:41:37.664552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-07-26 16:41:37.664754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-07-26 16:41:37.664791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-07-26 16:41:37.664986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-07-26 16:41:37.665027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-07-26 16:41:37.665252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-07-26 16:41:37.665298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-07-26 16:41:37.665513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-07-26 16:41:37.665567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 
00:36:18.180 [2024-07-26 16:41:37.665789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-07-26 16:41:37.665841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-07-26 16:41:37.666025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-07-26 16:41:37.666069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-07-26 16:41:37.666237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-07-26 16:41:37.666273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-07-26 16:41:37.666481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-07-26 16:41:37.666534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-07-26 16:41:37.666738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-07-26 16:41:37.666789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-07-26 16:41:37.666979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-07-26 16:41:37.667013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-07-26 16:41:37.667189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-07-26 16:41:37.667225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-07-26 16:41:37.667455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-07-26 16:41:37.667499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-07-26 16:41:37.667859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-07-26 16:41:37.667915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-07-26 16:41:37.668193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-07-26 16:41:37.668227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 
00:36:18.180 [2024-07-26 16:41:37.668432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-07-26 16:41:37.668473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-07-26 16:41:37.668667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-07-26 16:41:37.668704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-07-26 16:41:37.668885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-07-26 16:41:37.668923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-07-26 16:41:37.669146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-07-26 16:41:37.669180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-07-26 16:41:37.669362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-07-26 16:41:37.669401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-07-26 16:41:37.669616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-07-26 16:41:37.669667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-07-26 16:41:37.669877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-07-26 16:41:37.669914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-07-26 16:41:37.670095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-07-26 16:41:37.670135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-07-26 16:41:37.670336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-07-26 16:41:37.670370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-07-26 16:41:37.670542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-07-26 16:41:37.670585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 
00:36:18.180 [2024-07-26 16:41:37.670784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-07-26 16:41:37.670822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-07-26 16:41:37.670997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-07-26 16:41:37.671035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-07-26 16:41:37.671244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-07-26 16:41:37.671278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-07-26 16:41:37.671469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.180 [2024-07-26 16:41:37.671536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.180 qpair failed and we were unable to recover it. 00:36:18.180 [2024-07-26 16:41:37.671752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-07-26 16:41:37.671791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-07-26 16:41:37.672010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-07-26 16:41:37.672046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-07-26 16:41:37.672227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-07-26 16:41:37.672264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-07-26 16:41:37.672419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-07-26 16:41:37.672451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-07-26 16:41:37.672771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-07-26 16:41:37.672828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-07-26 16:41:37.673021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-07-26 16:41:37.673056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 
00:36:18.181 [2024-07-26 16:41:37.673244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-07-26 16:41:37.673276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-07-26 16:41:37.673476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-07-26 16:41:37.673511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-07-26 16:41:37.673665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-07-26 16:41:37.673700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-07-26 16:41:37.673957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-07-26 16:41:37.674013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-07-26 16:41:37.674225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-07-26 16:41:37.674258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-07-26 16:41:37.674439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-07-26 16:41:37.674477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-07-26 16:41:37.674695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-07-26 16:41:37.674762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-07-26 16:41:37.674936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-07-26 16:41:37.674971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-07-26 16:41:37.675192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-07-26 16:41:37.675225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-07-26 16:41:37.675402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-07-26 16:41:37.675435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 
00:36:18.181 [2024-07-26 16:41:37.675663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-07-26 16:41:37.675723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-07-26 16:41:37.675887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-07-26 16:41:37.675923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-07-26 16:41:37.676138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-07-26 16:41:37.676172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-07-26 16:41:37.676317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-07-26 16:41:37.676365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-07-26 16:41:37.676555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-07-26 16:41:37.676591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-07-26 16:41:37.676785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-07-26 16:41:37.676822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-07-26 16:41:37.677007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-07-26 16:41:37.677039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-07-26 16:41:37.677264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-07-26 16:41:37.677312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-07-26 16:41:37.677504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-07-26 16:41:37.677543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-07-26 16:41:37.677785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-07-26 16:41:37.677826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 
00:36:18.181 [2024-07-26 16:41:37.678003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-07-26 16:41:37.678042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-07-26 16:41:37.678253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-07-26 16:41:37.678287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-07-26 16:41:37.678471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-07-26 16:41:37.678507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-07-26 16:41:37.678698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-07-26 16:41:37.678732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-07-26 16:41:37.678935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-07-26 16:41:37.678968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-07-26 16:41:37.679129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-07-26 16:41:37.679163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-07-26 16:41:37.679378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-07-26 16:41:37.679415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-07-26 16:41:37.679642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-07-26 16:41:37.679681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-07-26 16:41:37.679903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-07-26 16:41:37.679940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-07-26 16:41:37.680172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-07-26 16:41:37.680206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 
00:36:18.181 [2024-07-26 16:41:37.680383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-07-26 16:41:37.680420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.181 [2024-07-26 16:41:37.680733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.181 [2024-07-26 16:41:37.680796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.181 qpair failed and we were unable to recover it. 00:36:18.182 [2024-07-26 16:41:37.680986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-07-26 16:41:37.681023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-07-26 16:41:37.681235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-07-26 16:41:37.681269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-07-26 16:41:37.681463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-07-26 16:41:37.681498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-07-26 16:41:37.681674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-07-26 16:41:37.681727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-07-26 16:41:37.681907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-07-26 16:41:37.681948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-07-26 16:41:37.682128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-07-26 16:41:37.682161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-07-26 16:41:37.682361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-07-26 16:41:37.682409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-07-26 16:41:37.682598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-07-26 16:41:37.682632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 
00:36:18.182 [2024-07-26 16:41:37.682856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-07-26 16:41:37.682893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-07-26 16:41:37.683066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-07-26 16:41:37.683118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-07-26 16:41:37.683276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-07-26 16:41:37.683308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-07-26 16:41:37.683536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-07-26 16:41:37.683587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-07-26 16:41:37.683809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-07-26 16:41:37.683863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-07-26 16:41:37.684085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-07-26 16:41:37.684121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-07-26 16:41:37.684396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-07-26 16:41:37.684430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-07-26 16:41:37.684642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-07-26 16:41:37.684694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-07-26 16:41:37.684932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-07-26 16:41:37.684986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-07-26 16:41:37.685147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-07-26 16:41:37.685192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 
00:36:18.182 [2024-07-26 16:41:37.685383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-07-26 16:41:37.685434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-07-26 16:41:37.685627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-07-26 16:41:37.685680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-07-26 16:41:37.685910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-07-26 16:41:37.685962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-07-26 16:41:37.686108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-07-26 16:41:37.686143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-07-26 16:41:37.686382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-07-26 16:41:37.686434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-07-26 16:41:37.686663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-07-26 16:41:37.686698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-07-26 16:41:37.686881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-07-26 16:41:37.686951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-07-26 16:41:37.687162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-07-26 16:41:37.687219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-07-26 16:41:37.687429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-07-26 16:41:37.687481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-07-26 16:41:37.687665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-07-26 16:41:37.687718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 
00:36:18.182 [2024-07-26 16:41:37.687891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-07-26 16:41:37.687927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-07-26 16:41:37.688119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-07-26 16:41:37.688157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-07-26 16:41:37.688370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-07-26 16:41:37.688423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-07-26 16:41:37.688643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-07-26 16:41:37.688684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-07-26 16:41:37.689020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-07-26 16:41:37.689068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-07-26 16:41:37.689328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-07-26 16:41:37.689366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-07-26 16:41:37.689568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-07-26 16:41:37.689605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-07-26 16:41:37.689825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-07-26 16:41:37.689864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-07-26 16:41:37.690054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-07-26 16:41:37.690099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.182 qpair failed and we were unable to recover it. 00:36:18.182 [2024-07-26 16:41:37.690314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.182 [2024-07-26 16:41:37.690366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 
00:36:18.183 [2024-07-26 16:41:37.690605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-07-26 16:41:37.690657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-07-26 16:41:37.690937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-07-26 16:41:37.690997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-07-26 16:41:37.691190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-07-26 16:41:37.691229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-07-26 16:41:37.691451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-07-26 16:41:37.691502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-07-26 16:41:37.691738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-07-26 16:41:37.691790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-07-26 16:41:37.691961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-07-26 16:41:37.691995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-07-26 16:41:37.692195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-07-26 16:41:37.692259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-07-26 16:41:37.692535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-07-26 16:41:37.692588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-07-26 16:41:37.692864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-07-26 16:41:37.692923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-07-26 16:41:37.693149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-07-26 16:41:37.693202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 
00:36:18.183 [2024-07-26 16:41:37.693394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-07-26 16:41:37.693429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-07-26 16:41:37.693592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-07-26 16:41:37.693634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-07-26 16:41:37.693812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-07-26 16:41:37.693859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-07-26 16:41:37.694105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-07-26 16:41:37.694160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-07-26 16:41:37.694401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-07-26 16:41:37.694442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-07-26 16:41:37.694613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-07-26 16:41:37.694651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-07-26 16:41:37.694867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-07-26 16:41:37.694904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-07-26 16:41:37.695082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-07-26 16:41:37.695133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-07-26 16:41:37.695330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-07-26 16:41:37.695370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-07-26 16:41:37.695568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-07-26 16:41:37.695606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 
00:36:18.183 [2024-07-26 16:41:37.695837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-07-26 16:41:37.695874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-07-26 16:41:37.696110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-07-26 16:41:37.696146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-07-26 16:41:37.696359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-07-26 16:41:37.696410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-07-26 16:41:37.696609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-07-26 16:41:37.696663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-07-26 16:41:37.696848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-07-26 16:41:37.696907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-07-26 16:41:37.697105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-07-26 16:41:37.697144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-07-26 16:41:37.697363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-07-26 16:41:37.697417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-07-26 16:41:37.697624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-07-26 16:41:37.697680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-07-26 16:41:37.697871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-07-26 16:41:37.697910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-07-26 16:41:37.698192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-07-26 16:41:37.698249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 
00:36:18.183 [2024-07-26 16:41:37.698452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-07-26 16:41:37.698507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-07-26 16:41:37.698717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-07-26 16:41:37.698756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-07-26 16:41:37.698961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-07-26 16:41:37.698999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-07-26 16:41:37.699315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-07-26 16:41:37.699351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-07-26 16:41:37.699585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-07-26 16:41:37.699638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-07-26 16:41:37.699905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-07-26 16:41:37.699978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-07-26 16:41:37.700200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.183 [2024-07-26 16:41:37.700258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.183 qpair failed and we were unable to recover it. 00:36:18.183 [2024-07-26 16:41:37.700498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-07-26 16:41:37.700550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-07-26 16:41:37.700743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-07-26 16:41:37.700795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-07-26 16:41:37.700977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-07-26 16:41:37.701019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 
00:36:18.184 [2024-07-26 16:41:37.701257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-07-26 16:41:37.701317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-07-26 16:41:37.701498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-07-26 16:41:37.701548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-07-26 16:41:37.701781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-07-26 16:41:37.701833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-07-26 16:41:37.702102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-07-26 16:41:37.702139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-07-26 16:41:37.702327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-07-26 16:41:37.702378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-07-26 16:41:37.702606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-07-26 16:41:37.702659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-07-26 16:41:37.702835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-07-26 16:41:37.702881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-07-26 16:41:37.703129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-07-26 16:41:37.703167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-07-26 16:41:37.703366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-07-26 16:41:37.703403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-07-26 16:41:37.703597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-07-26 16:41:37.703634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 
00:36:18.184 [2024-07-26 16:41:37.703817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-07-26 16:41:37.703859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-07-26 16:41:37.704046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-07-26 16:41:37.704091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-07-26 16:41:37.704305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-07-26 16:41:37.704339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-07-26 16:41:37.704627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-07-26 16:41:37.704678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-07-26 16:41:37.704907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-07-26 16:41:37.704963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-07-26 16:41:37.705156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-07-26 16:41:37.705190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-07-26 16:41:37.705399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-07-26 16:41:37.705451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-07-26 16:41:37.705657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-07-26 16:41:37.705713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-07-26 16:41:37.705864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-07-26 16:41:37.705897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-07-26 16:41:37.706057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-07-26 16:41:37.706111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 
00:36:18.184 [2024-07-26 16:41:37.706306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-07-26 16:41:37.706363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-07-26 16:41:37.706578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-07-26 16:41:37.706634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-07-26 16:41:37.706823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-07-26 16:41:37.706860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-07-26 16:41:37.707084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-07-26 16:41:37.707119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-07-26 16:41:37.707329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-07-26 16:41:37.707381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-07-26 16:41:37.707563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-07-26 16:41:37.707615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-07-26 16:41:37.707810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-07-26 16:41:37.707854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-07-26 16:41:37.708042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-07-26 16:41:37.708084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-07-26 16:41:37.708313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-07-26 16:41:37.708365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 00:36:18.184 [2024-07-26 16:41:37.708585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.184 [2024-07-26 16:41:37.708637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.184 qpair failed and we were unable to recover it. 
00:36:18.185 [2024-07-26 16:41:37.708827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-07-26 16:41:37.708884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-07-26 16:41:37.709100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-07-26 16:41:37.709152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-07-26 16:41:37.709371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-07-26 16:41:37.709423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-07-26 16:41:37.709686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-07-26 16:41:37.709737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-07-26 16:41:37.709956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-07-26 16:41:37.709995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-07-26 16:41:37.710205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-07-26 16:41:37.710243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-07-26 16:41:37.710443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-07-26 16:41:37.710479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-07-26 16:41:37.710686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-07-26 16:41:37.710722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-07-26 16:41:37.710911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-07-26 16:41:37.710947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-07-26 16:41:37.711130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-07-26 16:41:37.711164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 
00:36:18.185 [2024-07-26 16:41:37.711363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-07-26 16:41:37.711396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-07-26 16:41:37.711602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-07-26 16:41:37.711634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-07-26 16:41:37.711835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-07-26 16:41:37.711871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-07-26 16:41:37.712075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-07-26 16:41:37.712125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-07-26 16:41:37.712274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-07-26 16:41:37.712307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-07-26 16:41:37.712513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-07-26 16:41:37.712549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-07-26 16:41:37.712734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-07-26 16:41:37.712776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-07-26 16:41:37.712970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-07-26 16:41:37.713006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-07-26 16:41:37.713204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-07-26 16:41:37.713236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-07-26 16:41:37.713393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-07-26 16:41:37.713425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 
00:36:18.185 [2024-07-26 16:41:37.713618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-07-26 16:41:37.713667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-07-26 16:41:37.713828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-07-26 16:41:37.713864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-07-26 16:41:37.714029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-07-26 16:41:37.714072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-07-26 16:41:37.714248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-07-26 16:41:37.714280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-07-26 16:41:37.714532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-07-26 16:41:37.714586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-07-26 16:41:37.714791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-07-26 16:41:37.714854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-07-26 16:41:37.715072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-07-26 16:41:37.715124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-07-26 16:41:37.715313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-07-26 16:41:37.715347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-07-26 16:41:37.715588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-07-26 16:41:37.715637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-07-26 16:41:37.715815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-07-26 16:41:37.715869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 
00:36:18.185 [2024-07-26 16:41:37.716111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-07-26 16:41:37.716145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-07-26 16:41:37.716296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-07-26 16:41:37.716328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-07-26 16:41:37.716547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-07-26 16:41:37.716583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-07-26 16:41:37.716775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-07-26 16:41:37.716811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-07-26 16:41:37.717042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-07-26 16:41:37.717085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-07-26 16:41:37.717286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-07-26 16:41:37.717319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-07-26 16:41:37.717562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.185 [2024-07-26 16:41:37.717598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.185 qpair failed and we were unable to recover it. 00:36:18.185 [2024-07-26 16:41:37.717837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-07-26 16:41:37.717873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-07-26 16:41:37.718034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-07-26 16:41:37.718077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-07-26 16:41:37.718296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-07-26 16:41:37.718328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 
00:36:18.186 [2024-07-26 16:41:37.718530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-07-26 16:41:37.718566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-07-26 16:41:37.718763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-07-26 16:41:37.718798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-07-26 16:41:37.719049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-07-26 16:41:37.719107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-07-26 16:41:37.719321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-07-26 16:41:37.719371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-07-26 16:41:37.719627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-07-26 16:41:37.719664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-07-26 16:41:37.719857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-07-26 16:41:37.719892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-07-26 16:41:37.720155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-07-26 16:41:37.720188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-07-26 16:41:37.720361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-07-26 16:41:37.720393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-07-26 16:41:37.720602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-07-26 16:41:37.720637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-07-26 16:41:37.720827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-07-26 16:41:37.720862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 
00:36:18.186 [2024-07-26 16:41:37.721057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-07-26 16:41:37.721115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-07-26 16:41:37.721286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-07-26 16:41:37.721319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-07-26 16:41:37.721515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-07-26 16:41:37.721550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-07-26 16:41:37.721737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-07-26 16:41:37.721773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-07-26 16:41:37.721973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-07-26 16:41:37.722023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-07-26 16:41:37.722197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-07-26 16:41:37.722229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-07-26 16:41:37.722455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-07-26 16:41:37.722496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-07-26 16:41:37.722712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-07-26 16:41:37.722747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-07-26 16:41:37.722969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-07-26 16:41:37.723005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-07-26 16:41:37.723178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-07-26 16:41:37.723211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 
00:36:18.186 [2024-07-26 16:41:37.723388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-07-26 16:41:37.723424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-07-26 16:41:37.723644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-07-26 16:41:37.723680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-07-26 16:41:37.723869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-07-26 16:41:37.723904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-07-26 16:41:37.724123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-07-26 16:41:37.724171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-07-26 16:41:37.724392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-07-26 16:41:37.724446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-07-26 16:41:37.724607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-07-26 16:41:37.724642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-07-26 16:41:37.724829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-07-26 16:41:37.724863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-07-26 16:41:37.725083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-07-26 16:41:37.725117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-07-26 16:41:37.725269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-07-26 16:41:37.725301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-07-26 16:41:37.725468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-07-26 16:41:37.725520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 
00:36:18.186 [2024-07-26 16:41:37.725702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-07-26 16:41:37.725754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-07-26 16:41:37.725927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-07-26 16:41:37.725961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-07-26 16:41:37.726185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-07-26 16:41:37.726237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-07-26 16:41:37.726446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-07-26 16:41:37.726487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.186 [2024-07-26 16:41:37.726707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.186 [2024-07-26 16:41:37.726745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.186 qpair failed and we were unable to recover it. 00:36:18.187 [2024-07-26 16:41:37.726966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.187 [2024-07-26 16:41:37.727002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.187 qpair failed and we were unable to recover it. 00:36:18.187 [2024-07-26 16:41:37.727207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.187 [2024-07-26 16:41:37.727239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.187 qpair failed and we were unable to recover it. 00:36:18.187 [2024-07-26 16:41:37.727410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.187 [2024-07-26 16:41:37.727443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.187 qpair failed and we were unable to recover it. 00:36:18.187 [2024-07-26 16:41:37.727651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.187 [2024-07-26 16:41:37.727687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.187 qpair failed and we were unable to recover it. 00:36:18.187 [2024-07-26 16:41:37.727907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.187 [2024-07-26 16:41:37.727944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.187 qpair failed and we were unable to recover it. 
00:36:18.187 [2024-07-26 16:41:37.728158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.187 [2024-07-26 16:41:37.728191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.187 qpair failed and we were unable to recover it. 00:36:18.187 [2024-07-26 16:41:37.728386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.187 [2024-07-26 16:41:37.728439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.187 qpair failed and we were unable to recover it. 00:36:18.187 [2024-07-26 16:41:37.728672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.187 [2024-07-26 16:41:37.728723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.187 qpair failed and we were unable to recover it. 00:36:18.187 [2024-07-26 16:41:37.728907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.187 [2024-07-26 16:41:37.728940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.187 qpair failed and we were unable to recover it. 00:36:18.187 [2024-07-26 16:41:37.729130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.187 [2024-07-26 16:41:37.729183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.187 qpair failed and we were unable to recover it. 00:36:18.187 [2024-07-26 16:41:37.729357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.187 [2024-07-26 16:41:37.729407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.187 qpair failed and we were unable to recover it. 00:36:18.187 [2024-07-26 16:41:37.729607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.187 [2024-07-26 16:41:37.729658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.187 qpair failed and we were unable to recover it. 00:36:18.187 [2024-07-26 16:41:37.729843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.187 [2024-07-26 16:41:37.729876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.187 qpair failed and we were unable to recover it. 00:36:18.187 [2024-07-26 16:41:37.730098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.187 [2024-07-26 16:41:37.730150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.187 qpair failed and we were unable to recover it. 00:36:18.187 [2024-07-26 16:41:37.730340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.187 [2024-07-26 16:41:37.730392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.187 qpair failed and we were unable to recover it. 
00:36:18.187 [2024-07-26 16:41:37.730608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.187 [2024-07-26 16:41:37.730642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.187 qpair failed and we were unable to recover it. 00:36:18.187 [2024-07-26 16:41:37.730822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.187 [2024-07-26 16:41:37.730856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.187 qpair failed and we were unable to recover it. 00:36:18.187 [2024-07-26 16:41:37.731030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.187 [2024-07-26 16:41:37.731070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.187 qpair failed and we were unable to recover it. 00:36:18.187 [2024-07-26 16:41:37.731284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.187 [2024-07-26 16:41:37.731323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.187 qpair failed and we were unable to recover it. 00:36:18.187 [2024-07-26 16:41:37.731516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.187 [2024-07-26 16:41:37.731552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.187 qpair failed and we were unable to recover it. 00:36:18.187 [2024-07-26 16:41:37.731747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.187 [2024-07-26 16:41:37.731784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.187 qpair failed and we were unable to recover it. 00:36:18.187 [2024-07-26 16:41:37.731982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.187 [2024-07-26 16:41:37.732021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.187 qpair failed and we were unable to recover it. 00:36:18.187 [2024-07-26 16:41:37.732241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.187 [2024-07-26 16:41:37.732274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.187 qpair failed and we were unable to recover it. 00:36:18.187 [2024-07-26 16:41:37.732478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.187 [2024-07-26 16:41:37.732514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.187 qpair failed and we were unable to recover it. 00:36:18.187 [2024-07-26 16:41:37.732700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.187 [2024-07-26 16:41:37.732736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.187 qpair failed and we were unable to recover it. 
00:36:18.187 [2024-07-26 16:41:37.732928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.187 [2024-07-26 16:41:37.732964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.187 qpair failed and we were unable to recover it. 00:36:18.187 [2024-07-26 16:41:37.733168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.187 [2024-07-26 16:41:37.733202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.187 qpair failed and we were unable to recover it. 00:36:18.187 [2024-07-26 16:41:37.733380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.187 [2024-07-26 16:41:37.733412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.187 qpair failed and we were unable to recover it. 00:36:18.187 [2024-07-26 16:41:37.733626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.187 [2024-07-26 16:41:37.733662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.187 qpair failed and we were unable to recover it. 00:36:18.187 [2024-07-26 16:41:37.733844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.187 [2024-07-26 16:41:37.733880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.187 qpair failed and we were unable to recover it. 00:36:18.187 [2024-07-26 16:41:37.734112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.187 [2024-07-26 16:41:37.734145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.187 qpair failed and we were unable to recover it. 00:36:18.187 [2024-07-26 16:41:37.734301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.187 [2024-07-26 16:41:37.734334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.187 qpair failed and we were unable to recover it. 00:36:18.187 [2024-07-26 16:41:37.734541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.187 [2024-07-26 16:41:37.734590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.187 qpair failed and we were unable to recover it. 00:36:18.187 [2024-07-26 16:41:37.734771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.187 [2024-07-26 16:41:37.734807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.187 qpair failed and we were unable to recover it. 00:36:18.187 [2024-07-26 16:41:37.734999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.187 [2024-07-26 16:41:37.735035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.187 qpair failed and we were unable to recover it. 
00:36:18.187 [2024-07-26 16:41:37.735244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.187 [2024-07-26 16:41:37.735276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.187 qpair failed and we were unable to recover it. 00:36:18.187 [2024-07-26 16:41:37.735476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.187 [2024-07-26 16:41:37.735524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.187 qpair failed and we were unable to recover it. 00:36:18.188 [2024-07-26 16:41:37.735723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.188 [2024-07-26 16:41:37.735761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.188 qpair failed and we were unable to recover it. 00:36:18.188 [2024-07-26 16:41:37.735931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.188 [2024-07-26 16:41:37.735981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.188 qpair failed and we were unable to recover it. 00:36:18.188 [2024-07-26 16:41:37.736149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.188 [2024-07-26 16:41:37.736182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.188 qpair failed and we were unable to recover it. 00:36:18.188 [2024-07-26 16:41:37.736390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.188 [2024-07-26 16:41:37.736426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.188 qpair failed and we were unable to recover it. 00:36:18.188 [2024-07-26 16:41:37.736645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.188 [2024-07-26 16:41:37.736680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.188 qpair failed and we were unable to recover it. 00:36:18.188 [2024-07-26 16:41:37.736970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.188 [2024-07-26 16:41:37.737006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.188 qpair failed and we were unable to recover it. 00:36:18.188 [2024-07-26 16:41:37.737216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.188 [2024-07-26 16:41:37.737249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.188 qpair failed and we were unable to recover it. 00:36:18.188 [2024-07-26 16:41:37.737478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.188 [2024-07-26 16:41:37.737513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.188 qpair failed and we were unable to recover it. 
00:36:18.188 [2024-07-26 16:41:37.737760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.188 [2024-07-26 16:41:37.737796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.188 qpair failed and we were unable to recover it. 00:36:18.188 [2024-07-26 16:41:37.737963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.188 [2024-07-26 16:41:37.737999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.188 qpair failed and we were unable to recover it. 00:36:18.188 [2024-07-26 16:41:37.738202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.188 [2024-07-26 16:41:37.738235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.188 qpair failed and we were unable to recover it. 00:36:18.188 [2024-07-26 16:41:37.738410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.188 [2024-07-26 16:41:37.738457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.188 qpair failed and we were unable to recover it. 00:36:18.188 [2024-07-26 16:41:37.738764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.188 [2024-07-26 16:41:37.738820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.188 qpair failed and we were unable to recover it. 00:36:18.188 [2024-07-26 16:41:37.739005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.188 [2024-07-26 16:41:37.739054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.188 qpair failed and we were unable to recover it. 00:36:18.188 [2024-07-26 16:41:37.739218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.188 [2024-07-26 16:41:37.739250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.188 qpair failed and we were unable to recover it. 00:36:18.188 [2024-07-26 16:41:37.739460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.188 [2024-07-26 16:41:37.739511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.188 qpair failed and we were unable to recover it. 00:36:18.188 [2024-07-26 16:41:37.739712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.188 [2024-07-26 16:41:37.739763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.188 qpair failed and we were unable to recover it. 00:36:18.188 [2024-07-26 16:41:37.739934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.188 [2024-07-26 16:41:37.739972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.188 qpair failed and we were unable to recover it. 
00:36:18.188 [2024-07-26 16:41:37.740154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.188 [2024-07-26 16:41:37.740186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.188 qpair failed and we were unable to recover it. 00:36:18.188 [2024-07-26 16:41:37.740377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.188 [2024-07-26 16:41:37.740412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.188 qpair failed and we were unable to recover it. 00:36:18.188 [2024-07-26 16:41:37.740657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.188 [2024-07-26 16:41:37.740693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.188 qpair failed and we were unable to recover it. 00:36:18.188 [2024-07-26 16:41:37.740902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.188 [2024-07-26 16:41:37.740939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.188 qpair failed and we were unable to recover it. 00:36:18.188 [2024-07-26 16:41:37.741170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.188 [2024-07-26 16:41:37.741203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.188 qpair failed and we were unable to recover it. 00:36:18.188 [2024-07-26 16:41:37.741407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.188 [2024-07-26 16:41:37.741440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.188 qpair failed and we were unable to recover it. 00:36:18.188 [2024-07-26 16:41:37.741680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.188 [2024-07-26 16:41:37.741720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.188 qpair failed and we were unable to recover it. 00:36:18.188 [2024-07-26 16:41:37.741915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.188 [2024-07-26 16:41:37.741951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.188 qpair failed and we were unable to recover it. 00:36:18.188 [2024-07-26 16:41:37.742161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.188 [2024-07-26 16:41:37.742194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.188 qpair failed and we were unable to recover it. 00:36:18.188 [2024-07-26 16:41:37.742397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.188 [2024-07-26 16:41:37.742433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.188 qpair failed and we were unable to recover it. 
00:36:18.188 [2024-07-26 16:41:37.742605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.188 [2024-07-26 16:41:37.742640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.188 qpair failed and we were unable to recover it. 00:36:18.188 [2024-07-26 16:41:37.742875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.188 [2024-07-26 16:41:37.742907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.188 qpair failed and we were unable to recover it. 00:36:18.188 [2024-07-26 16:41:37.743095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.188 [2024-07-26 16:41:37.743128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.188 qpair failed and we were unable to recover it. 00:36:18.188 [2024-07-26 16:41:37.743305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.188 [2024-07-26 16:41:37.743337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.188 qpair failed and we were unable to recover it. 00:36:18.188 [2024-07-26 16:41:37.743603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.188 [2024-07-26 16:41:37.743639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.188 qpair failed and we were unable to recover it. 00:36:18.189 [2024-07-26 16:41:37.743835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.189 [2024-07-26 16:41:37.743871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.189 qpair failed and we were unable to recover it. 00:36:18.189 [2024-07-26 16:41:37.744109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.189 [2024-07-26 16:41:37.744142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.189 qpair failed and we were unable to recover it. 00:36:18.189 [2024-07-26 16:41:37.744304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.189 [2024-07-26 16:41:37.744367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.189 qpair failed and we were unable to recover it. 00:36:18.189 [2024-07-26 16:41:37.744585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.189 [2024-07-26 16:41:37.744638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.189 qpair failed and we were unable to recover it. 00:36:18.189 [2024-07-26 16:41:37.744855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.189 [2024-07-26 16:41:37.744906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.189 qpair failed and we were unable to recover it. 
00:36:18.189 [2024-07-26 16:41:37.745094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.189 [2024-07-26 16:41:37.745129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.189 qpair failed and we were unable to recover it. 00:36:18.189 [2024-07-26 16:41:37.745339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.189 [2024-07-26 16:41:37.745389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.189 qpair failed and we were unable to recover it. 00:36:18.189 [2024-07-26 16:41:37.745620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.189 [2024-07-26 16:41:37.745672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.189 qpair failed and we were unable to recover it. 00:36:18.189 [2024-07-26 16:41:37.745905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.189 [2024-07-26 16:41:37.745955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.189 qpair failed and we were unable to recover it. 00:36:18.189 [2024-07-26 16:41:37.746161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.189 [2024-07-26 16:41:37.746213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.189 qpair failed and we were unable to recover it. 00:36:18.189 [2024-07-26 16:41:37.746415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.189 [2024-07-26 16:41:37.746468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.189 qpair failed and we were unable to recover it. 00:36:18.189 [2024-07-26 16:41:37.746748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.189 [2024-07-26 16:41:37.746803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.189 qpair failed and we were unable to recover it. 00:36:18.189 [2024-07-26 16:41:37.746960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.189 [2024-07-26 16:41:37.746994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.189 qpair failed and we were unable to recover it. 00:36:18.189 [2024-07-26 16:41:37.747212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.189 [2024-07-26 16:41:37.747264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.189 qpair failed and we were unable to recover it. 00:36:18.189 [2024-07-26 16:41:37.747464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.189 [2024-07-26 16:41:37.747516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.189 qpair failed and we were unable to recover it. 
00:36:18.189 [2024-07-26 16:41:37.747729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.189 [2024-07-26 16:41:37.747768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.189 qpair failed and we were unable to recover it. 00:36:18.189 [2024-07-26 16:41:37.747990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.189 [2024-07-26 16:41:37.748026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.189 qpair failed and we were unable to recover it. 00:36:18.189 [2024-07-26 16:41:37.748227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.189 [2024-07-26 16:41:37.748260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.189 qpair failed and we were unable to recover it. 00:36:18.189 [2024-07-26 16:41:37.748514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.189 [2024-07-26 16:41:37.748550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.189 qpair failed and we were unable to recover it. 00:36:18.189 [2024-07-26 16:41:37.748742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.189 [2024-07-26 16:41:37.748777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.189 qpair failed and we were unable to recover it. 00:36:18.189 [2024-07-26 16:41:37.748939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.189 [2024-07-26 16:41:37.748971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.189 qpair failed and we were unable to recover it. 00:36:18.189 [2024-07-26 16:41:37.749176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.189 [2024-07-26 16:41:37.749209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.189 qpair failed and we were unable to recover it. 00:36:18.189 [2024-07-26 16:41:37.749370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.189 [2024-07-26 16:41:37.749406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.189 qpair failed and we were unable to recover it. 00:36:18.189 [2024-07-26 16:41:37.749601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.189 [2024-07-26 16:41:37.749638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.189 qpair failed and we were unable to recover it. 00:36:18.189 [2024-07-26 16:41:37.749840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.189 [2024-07-26 16:41:37.749877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.189 qpair failed and we were unable to recover it. 
00:36:18.189 [2024-07-26 16:41:37.750067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.189 [2024-07-26 16:41:37.750117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.189 qpair failed and we were unable to recover it. 00:36:18.189 [2024-07-26 16:41:37.750290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.189 [2024-07-26 16:41:37.750322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.189 qpair failed and we were unable to recover it. 00:36:18.189 [2024-07-26 16:41:37.750545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.189 [2024-07-26 16:41:37.750580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.189 qpair failed and we were unable to recover it. 00:36:18.189 [2024-07-26 16:41:37.750795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.189 [2024-07-26 16:41:37.750831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.189 qpair failed and we were unable to recover it. 00:36:18.189 [2024-07-26 16:41:37.751048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.189 [2024-07-26 16:41:37.751088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.189 qpair failed and we were unable to recover it. 00:36:18.189 [2024-07-26 16:41:37.751266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.189 [2024-07-26 16:41:37.751299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.189 qpair failed and we were unable to recover it. 00:36:18.189 [2024-07-26 16:41:37.751525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.189 [2024-07-26 16:41:37.751566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.189 qpair failed and we were unable to recover it. 00:36:18.189 [2024-07-26 16:41:37.751803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.189 [2024-07-26 16:41:37.751839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.189 qpair failed and we were unable to recover it. 00:36:18.189 [2024-07-26 16:41:37.752032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.189 [2024-07-26 16:41:37.752076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.189 qpair failed and we were unable to recover it. 00:36:18.189 [2024-07-26 16:41:37.752273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.189 [2024-07-26 16:41:37.752305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.189 qpair failed and we were unable to recover it. 
00:36:18.189 [2024-07-26 16:41:37.752453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.189 [2024-07-26 16:41:37.752503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.189 qpair failed and we were unable to recover it. 00:36:18.189 [2024-07-26 16:41:37.752725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.189 [2024-07-26 16:41:37.752760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.189 qpair failed and we were unable to recover it. 00:36:18.189 [2024-07-26 16:41:37.752933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.190 [2024-07-26 16:41:37.752964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.190 qpair failed and we were unable to recover it. 00:36:18.190 [2024-07-26 16:41:37.753138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.190 [2024-07-26 16:41:37.753171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.190 qpair failed and we were unable to recover it. 00:36:18.190 [2024-07-26 16:41:37.753364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.190 [2024-07-26 16:41:37.753399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.190 qpair failed and we were unable to recover it. 00:36:18.190 [2024-07-26 16:41:37.753589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.190 [2024-07-26 16:41:37.753625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.190 qpair failed and we were unable to recover it. 00:36:18.190 [2024-07-26 16:41:37.753882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.190 [2024-07-26 16:41:37.753917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.190 qpair failed and we were unable to recover it. 00:36:18.190 [2024-07-26 16:41:37.754134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.190 [2024-07-26 16:41:37.754167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.190 qpair failed and we were unable to recover it. 00:36:18.190 [2024-07-26 16:41:37.754385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.190 [2024-07-26 16:41:37.754421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.190 qpair failed and we were unable to recover it. 00:36:18.190 [2024-07-26 16:41:37.754628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.190 [2024-07-26 16:41:37.754663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.190 qpair failed and we were unable to recover it. 
00:36:18.190 [2024-07-26 16:41:37.754889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.190 [2024-07-26 16:41:37.754925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.190 qpair failed and we were unable to recover it. 00:36:18.190 [2024-07-26 16:41:37.755126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.190 [2024-07-26 16:41:37.755170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.190 qpair failed and we were unable to recover it. 00:36:18.190 [2024-07-26 16:41:37.755349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.190 [2024-07-26 16:41:37.755382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.190 qpair failed and we were unable to recover it. 00:36:18.190 [2024-07-26 16:41:37.755585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.190 [2024-07-26 16:41:37.755634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.190 qpair failed and we were unable to recover it. 00:36:18.190 [2024-07-26 16:41:37.755827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.190 [2024-07-26 16:41:37.755863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.190 qpair failed and we were unable to recover it. 00:36:18.190 [2024-07-26 16:41:37.756045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.190 [2024-07-26 16:41:37.756084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.190 qpair failed and we were unable to recover it. 00:36:18.190 [2024-07-26 16:41:37.756262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.190 [2024-07-26 16:41:37.756294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.190 qpair failed and we were unable to recover it. 00:36:18.190 [2024-07-26 16:41:37.756464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.190 [2024-07-26 16:41:37.756500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.190 qpair failed and we were unable to recover it. 00:36:18.190 [2024-07-26 16:41:37.756748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.190 [2024-07-26 16:41:37.756784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.190 qpair failed and we were unable to recover it. 00:36:18.190 [2024-07-26 16:41:37.756988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.190 [2024-07-26 16:41:37.757024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.190 qpair failed and we were unable to recover it. 
00:36:18.190 [2024-07-26 16:41:37.757263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.190 [2024-07-26 16:41:37.757295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.190 qpair failed and we were unable to recover it. 00:36:18.190 [2024-07-26 16:41:37.757470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.190 [2024-07-26 16:41:37.757503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.190 qpair failed and we were unable to recover it. 00:36:18.190 [2024-07-26 16:41:37.757679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.190 [2024-07-26 16:41:37.757711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.190 qpair failed and we were unable to recover it. 00:36:18.190 [2024-07-26 16:41:37.757917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.190 [2024-07-26 16:41:37.757952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.190 qpair failed and we were unable to recover it. 00:36:18.190 [2024-07-26 16:41:37.758118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.190 [2024-07-26 16:41:37.758151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.190 qpair failed and we were unable to recover it. 00:36:18.190 [2024-07-26 16:41:37.758348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.190 [2024-07-26 16:41:37.758386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.190 qpair failed and we were unable to recover it. 00:36:18.190 [2024-07-26 16:41:37.758548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.190 [2024-07-26 16:41:37.758584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.190 qpair failed and we were unable to recover it. 00:36:18.190 [2024-07-26 16:41:37.758778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.190 [2024-07-26 16:41:37.758810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.190 qpair failed and we were unable to recover it. 00:36:18.190 [2024-07-26 16:41:37.759004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.190 [2024-07-26 16:41:37.759039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.190 qpair failed and we were unable to recover it. 00:36:18.190 [2024-07-26 16:41:37.759210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.190 [2024-07-26 16:41:37.759242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.190 qpair failed and we were unable to recover it. 
00:36:18.190 [2024-07-26 16:41:37.759393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.190 [2024-07-26 16:41:37.759425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.190 qpair failed and we were unable to recover it. 00:36:18.190 [2024-07-26 16:41:37.759582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.190 [2024-07-26 16:41:37.759614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.190 qpair failed and we were unable to recover it. 00:36:18.190 [2024-07-26 16:41:37.759831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.190 [2024-07-26 16:41:37.759867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.190 qpair failed and we were unable to recover it. 00:36:18.190 [2024-07-26 16:41:37.760072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.190 [2024-07-26 16:41:37.760105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.190 qpair failed and we were unable to recover it. 00:36:18.190 [2024-07-26 16:41:37.760305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.190 [2024-07-26 16:41:37.760341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.190 qpair failed and we were unable to recover it. 00:36:18.190 [2024-07-26 16:41:37.760506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.190 [2024-07-26 16:41:37.760542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.190 qpair failed and we were unable to recover it. 00:36:18.190 [2024-07-26 16:41:37.760762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.190 [2024-07-26 16:41:37.760801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.190 qpair failed and we were unable to recover it. 00:36:18.190 [2024-07-26 16:41:37.760997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.190 [2024-07-26 16:41:37.761032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.190 qpair failed and we were unable to recover it. 00:36:18.190 [2024-07-26 16:41:37.761242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.190 [2024-07-26 16:41:37.761274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.190 qpair failed and we were unable to recover it. 00:36:18.190 [2024-07-26 16:41:37.761473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.190 [2024-07-26 16:41:37.761505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.190 qpair failed and we were unable to recover it. 
00:36:18.191 [2024-07-26 16:41:37.761693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.191 [2024-07-26 16:41:37.761726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.191 qpair failed and we were unable to recover it. 00:36:18.191 [2024-07-26 16:41:37.761901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.191 [2024-07-26 16:41:37.761932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.191 qpair failed and we were unable to recover it. 00:36:18.191 [2024-07-26 16:41:37.762074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.191 [2024-07-26 16:41:37.762107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.191 qpair failed and we were unable to recover it. 00:36:18.191 [2024-07-26 16:41:37.762328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.191 [2024-07-26 16:41:37.762364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.191 qpair failed and we were unable to recover it. 00:36:18.191 [2024-07-26 16:41:37.762589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.191 [2024-07-26 16:41:37.762624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.191 qpair failed and we were unable to recover it. 00:36:18.191 [2024-07-26 16:41:37.762819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.191 [2024-07-26 16:41:37.762852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.191 qpair failed and we were unable to recover it. 00:36:18.191 [2024-07-26 16:41:37.763048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.191 [2024-07-26 16:41:37.763092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.191 qpair failed and we were unable to recover it. 00:36:18.191 [2024-07-26 16:41:37.763314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.191 [2024-07-26 16:41:37.763350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.191 qpair failed and we were unable to recover it. 00:36:18.191 [2024-07-26 16:41:37.763531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.191 [2024-07-26 16:41:37.763562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.191 qpair failed and we were unable to recover it. 00:36:18.191 [2024-07-26 16:41:37.763766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.191 [2024-07-26 16:41:37.763815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.191 qpair failed and we were unable to recover it. 
00:36:18.191 [2024-07-26 16:41:37.764039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.191 [2024-07-26 16:41:37.764079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.191 qpair failed and we were unable to recover it. 00:36:18.191 [2024-07-26 16:41:37.764230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.191 [2024-07-26 16:41:37.764263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.191 qpair failed and we were unable to recover it. 00:36:18.191 [2024-07-26 16:41:37.764471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.191 [2024-07-26 16:41:37.764507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.191 qpair failed and we were unable to recover it. 00:36:18.191 [2024-07-26 16:41:37.764696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.191 [2024-07-26 16:41:37.764731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.191 qpair failed and we were unable to recover it. 00:36:18.191 [2024-07-26 16:41:37.764985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.191 [2024-07-26 16:41:37.765021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.191 qpair failed and we were unable to recover it. 00:36:18.191 [2024-07-26 16:41:37.765261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.191 [2024-07-26 16:41:37.765299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.191 qpair failed and we were unable to recover it. 00:36:18.191 [2024-07-26 16:41:37.765469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.191 [2024-07-26 16:41:37.765504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.191 qpair failed and we were unable to recover it. 00:36:18.191 [2024-07-26 16:41:37.765722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.191 [2024-07-26 16:41:37.765754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.191 qpair failed and we were unable to recover it. 00:36:18.191 [2024-07-26 16:41:37.765954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.191 [2024-07-26 16:41:37.765991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.191 qpair failed and we were unable to recover it. 00:36:18.191 [2024-07-26 16:41:37.766214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.191 [2024-07-26 16:41:37.766247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.191 qpair failed and we were unable to recover it. 
00:36:18.191 [2024-07-26 16:41:37.766392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.191 [2024-07-26 16:41:37.766425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.191 qpair failed and we were unable to recover it. 00:36:18.191 [2024-07-26 16:41:37.766600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.191 [2024-07-26 16:41:37.766633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.191 qpair failed and we were unable to recover it. 00:36:18.191 [2024-07-26 16:41:37.766838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.191 [2024-07-26 16:41:37.766874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.191 qpair failed and we were unable to recover it. 00:36:18.191 [2024-07-26 16:41:37.767076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.191 [2024-07-26 16:41:37.767109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.191 qpair failed and we were unable to recover it. 00:36:18.191 [2024-07-26 16:41:37.767350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.191 [2024-07-26 16:41:37.767382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.191 qpair failed and we were unable to recover it. 00:36:18.191 [2024-07-26 16:41:37.767585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.191 [2024-07-26 16:41:37.767634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.191 qpair failed and we were unable to recover it. 00:36:18.191 [2024-07-26 16:41:37.767831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.191 [2024-07-26 16:41:37.767863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.191 qpair failed and we were unable to recover it. 00:36:18.191 [2024-07-26 16:41:37.768019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.191 [2024-07-26 16:41:37.768051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.191 qpair failed and we were unable to recover it. 00:36:18.191 [2024-07-26 16:41:37.768260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.191 [2024-07-26 16:41:37.768292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.191 qpair failed and we were unable to recover it. 00:36:18.191 [2024-07-26 16:41:37.768471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.191 [2024-07-26 16:41:37.768504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.191 qpair failed and we were unable to recover it. 
00:36:18.191 [2024-07-26 16:41:37.768694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.191 [2024-07-26 16:41:37.768730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.191 qpair failed and we were unable to recover it. 00:36:18.191 [2024-07-26 16:41:37.768911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.191 [2024-07-26 16:41:37.768943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.191 qpair failed and we were unable to recover it. 00:36:18.191 [2024-07-26 16:41:37.769126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.191 [2024-07-26 16:41:37.769160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.191 qpair failed and we were unable to recover it. 00:36:18.191 [2024-07-26 16:41:37.769343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.191 [2024-07-26 16:41:37.769380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.191 qpair failed and we were unable to recover it. 00:36:18.191 [2024-07-26 16:41:37.769582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.191 [2024-07-26 16:41:37.769618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.191 qpair failed and we were unable to recover it. 00:36:18.191 [2024-07-26 16:41:37.769821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.191 [2024-07-26 16:41:37.769853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.191 qpair failed and we were unable to recover it. 00:36:18.191 [2024-07-26 16:41:37.770075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.191 [2024-07-26 16:41:37.770116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.191 qpair failed and we were unable to recover it. 00:36:18.191 [2024-07-26 16:41:37.770338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.192 [2024-07-26 16:41:37.770381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.192 qpair failed and we were unable to recover it. 00:36:18.192 [2024-07-26 16:41:37.770553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.192 [2024-07-26 16:41:37.770586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.192 qpair failed and we were unable to recover it. 00:36:18.192 [2024-07-26 16:41:37.770763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.192 [2024-07-26 16:41:37.770795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.192 qpair failed and we were unable to recover it. 
00:36:18.192 [2024-07-26 16:41:37.770981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.192 [2024-07-26 16:41:37.771016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.192 qpair failed and we were unable to recover it. 00:36:18.192 [2024-07-26 16:41:37.771222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.192 [2024-07-26 16:41:37.771255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.192 qpair failed and we were unable to recover it. 00:36:18.192 [2024-07-26 16:41:37.771452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.192 [2024-07-26 16:41:37.771487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.192 qpair failed and we were unable to recover it. 00:36:18.192 [2024-07-26 16:41:37.771678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.192 [2024-07-26 16:41:37.771713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.192 qpair failed and we were unable to recover it. 00:36:18.192 [2024-07-26 16:41:37.771906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.192 [2024-07-26 16:41:37.771942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.192 qpair failed and we were unable to recover it. 00:36:18.192 [2024-07-26 16:41:37.772120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.192 [2024-07-26 16:41:37.772152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.192 qpair failed and we were unable to recover it. 00:36:18.192 [2024-07-26 16:41:37.772302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.192 [2024-07-26 16:41:37.772350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.192 qpair failed and we were unable to recover it. 00:36:18.192 [2024-07-26 16:41:37.772535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.192 [2024-07-26 16:41:37.772567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.192 qpair failed and we were unable to recover it. 00:36:18.192 [2024-07-26 16:41:37.772763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.192 [2024-07-26 16:41:37.772799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.192 qpair failed and we were unable to recover it. 00:36:18.192 [2024-07-26 16:41:37.772989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.192 [2024-07-26 16:41:37.773025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.192 qpair failed and we were unable to recover it. 
00:36:18.192 [2024-07-26 16:41:37.773213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.192 [2024-07-26 16:41:37.773246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.192 qpair failed and we were unable to recover it. 00:36:18.192 [2024-07-26 16:41:37.773450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.192 [2024-07-26 16:41:37.773501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.192 qpair failed and we were unable to recover it. 00:36:18.192 [2024-07-26 16:41:37.773696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.192 [2024-07-26 16:41:37.773733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.192 qpair failed and we were unable to recover it. 00:36:18.192 [2024-07-26 16:41:37.773931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.192 [2024-07-26 16:41:37.773964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.192 qpair failed and we were unable to recover it. 00:36:18.192 [2024-07-26 16:41:37.774169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.192 [2024-07-26 16:41:37.774206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.192 qpair failed and we were unable to recover it. 00:36:18.192 [2024-07-26 16:41:37.774405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.192 [2024-07-26 16:41:37.774442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.192 qpair failed and we were unable to recover it. 00:36:18.192 [2024-07-26 16:41:37.774645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.192 [2024-07-26 16:41:37.774677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.192 qpair failed and we were unable to recover it. 00:36:18.192 [2024-07-26 16:41:37.774849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.192 [2024-07-26 16:41:37.774886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.192 qpair failed and we were unable to recover it. 00:36:18.192 [2024-07-26 16:41:37.775105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.192 [2024-07-26 16:41:37.775141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.192 qpair failed and we were unable to recover it. 00:36:18.192 [2024-07-26 16:41:37.775375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.192 [2024-07-26 16:41:37.775408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.192 qpair failed and we were unable to recover it. 
00:36:18.192 [2024-07-26 16:41:37.775618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.192 [2024-07-26 16:41:37.775651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.192 qpair failed and we were unable to recover it. 00:36:18.192 [2024-07-26 16:41:37.775827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.192 [2024-07-26 16:41:37.775859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.192 qpair failed and we were unable to recover it. 00:36:18.192 [2024-07-26 16:41:37.776008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.192 [2024-07-26 16:41:37.776040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.192 qpair failed and we were unable to recover it. 00:36:18.192 [2024-07-26 16:41:37.776253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.192 [2024-07-26 16:41:37.776286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.192 qpair failed and we were unable to recover it. 00:36:18.192 [2024-07-26 16:41:37.776484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.192 [2024-07-26 16:41:37.776516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.192 qpair failed and we were unable to recover it. 00:36:18.192 [2024-07-26 16:41:37.776686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.192 [2024-07-26 16:41:37.776718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.192 qpair failed and we were unable to recover it. 00:36:18.192 [2024-07-26 16:41:37.776918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.192 [2024-07-26 16:41:37.776955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.192 qpair failed and we were unable to recover it. 00:36:18.192 [2024-07-26 16:41:37.777153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.192 [2024-07-26 16:41:37.777189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.192 qpair failed and we were unable to recover it. 00:36:18.192 [2024-07-26 16:41:37.777378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.192 [2024-07-26 16:41:37.777411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.192 qpair failed and we were unable to recover it. 00:36:18.192 [2024-07-26 16:41:37.777593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.192 [2024-07-26 16:41:37.777625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.192 qpair failed and we were unable to recover it. 
00:36:18.192 [2024-07-26 16:41:37.777819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.192 [2024-07-26 16:41:37.777856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.192 qpair failed and we were unable to recover it. 00:36:18.192 [2024-07-26 16:41:37.778121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.192 [2024-07-26 16:41:37.778154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.192 qpair failed and we were unable to recover it. 00:36:18.192 [2024-07-26 16:41:37.778350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.192 [2024-07-26 16:41:37.778388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.192 qpair failed and we were unable to recover it. 00:36:18.192 [2024-07-26 16:41:37.778577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.192 [2024-07-26 16:41:37.778614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.192 qpair failed and we were unable to recover it. 00:36:18.192 [2024-07-26 16:41:37.778824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.192 [2024-07-26 16:41:37.778856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.192 qpair failed and we were unable to recover it. 00:36:18.192 [2024-07-26 16:41:37.779067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.193 [2024-07-26 16:41:37.779104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.193 qpair failed and we were unable to recover it. 00:36:18.193 [2024-07-26 16:41:37.779296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.193 [2024-07-26 16:41:37.779332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.193 qpair failed and we were unable to recover it. 00:36:18.193 [2024-07-26 16:41:37.779478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.193 [2024-07-26 16:41:37.779510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.193 qpair failed and we were unable to recover it. 00:36:18.193 [2024-07-26 16:41:37.779653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.193 [2024-07-26 16:41:37.779685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.193 qpair failed and we were unable to recover it. 00:36:18.193 [2024-07-26 16:41:37.779903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.193 [2024-07-26 16:41:37.779938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.193 qpair failed and we were unable to recover it. 
00:36:18.193 [2024-07-26 16:41:37.780119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.193 [2024-07-26 16:41:37.780152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.193 qpair failed and we were unable to recover it. 00:36:18.193 [2024-07-26 16:41:37.780351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.193 [2024-07-26 16:41:37.780387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.193 qpair failed and we were unable to recover it. 00:36:18.193 [2024-07-26 16:41:37.780551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.193 [2024-07-26 16:41:37.780586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.193 qpair failed and we were unable to recover it. 00:36:18.193 [2024-07-26 16:41:37.780775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.193 [2024-07-26 16:41:37.780807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.193 qpair failed and we were unable to recover it. 00:36:18.193 [2024-07-26 16:41:37.781007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.193 [2024-07-26 16:41:37.781043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.193 qpair failed and we were unable to recover it. 00:36:18.193 [2024-07-26 16:41:37.781231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.193 [2024-07-26 16:41:37.781263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.193 qpair failed and we were unable to recover it. 00:36:18.193 [2024-07-26 16:41:37.781412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.193 [2024-07-26 16:41:37.781444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.193 qpair failed and we were unable to recover it. 00:36:18.193 [2024-07-26 16:41:37.781670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.193 [2024-07-26 16:41:37.781706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.193 qpair failed and we were unable to recover it. 00:36:18.193 [2024-07-26 16:41:37.781868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.193 [2024-07-26 16:41:37.781904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.193 qpair failed and we were unable to recover it. 00:36:18.193 [2024-07-26 16:41:37.782119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.193 [2024-07-26 16:41:37.782152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.193 qpair failed and we were unable to recover it. 
00:36:18.193 [2024-07-26 16:41:37.782357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.193 [2024-07-26 16:41:37.782393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.193 qpair failed and we were unable to recover it. 00:36:18.193 [2024-07-26 16:41:37.782587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.193 [2024-07-26 16:41:37.782623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.193 qpair failed and we were unable to recover it. 00:36:18.193 [2024-07-26 16:41:37.782813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.193 [2024-07-26 16:41:37.782845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.193 qpair failed and we were unable to recover it. 00:36:18.193 [2024-07-26 16:41:37.783072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.193 [2024-07-26 16:41:37.783108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.193 qpair failed and we were unable to recover it. 00:36:18.193 [2024-07-26 16:41:37.783294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.193 [2024-07-26 16:41:37.783330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.193 qpair failed and we were unable to recover it. 00:36:18.193 [2024-07-26 16:41:37.783501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.193 [2024-07-26 16:41:37.783534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.193 qpair failed and we were unable to recover it. 00:36:18.193 [2024-07-26 16:41:37.783755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.193 [2024-07-26 16:41:37.783790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.193 qpair failed and we were unable to recover it. 00:36:18.193 [2024-07-26 16:41:37.783981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.193 [2024-07-26 16:41:37.784017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.193 qpair failed and we were unable to recover it. 00:36:18.193 [2024-07-26 16:41:37.784195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.193 [2024-07-26 16:41:37.784228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.193 qpair failed and we were unable to recover it. 00:36:18.193 [2024-07-26 16:41:37.784400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.193 [2024-07-26 16:41:37.784435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.193 qpair failed and we were unable to recover it. 
00:36:18.193 [2024-07-26 16:41:37.784651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.193 [2024-07-26 16:41:37.784687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.193 qpair failed and we were unable to recover it. 00:36:18.193 [2024-07-26 16:41:37.784877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.193 [2024-07-26 16:41:37.784919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.193 qpair failed and we were unable to recover it. 00:36:18.193 [2024-07-26 16:41:37.785143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.193 [2024-07-26 16:41:37.785179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.193 qpair failed and we were unable to recover it. 00:36:18.193 [2024-07-26 16:41:37.785339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.193 [2024-07-26 16:41:37.785375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.193 qpair failed and we were unable to recover it. 00:36:18.193 [2024-07-26 16:41:37.785549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.193 [2024-07-26 16:41:37.785581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.193 qpair failed and we were unable to recover it. 00:36:18.193 [2024-07-26 16:41:37.785782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.193 [2024-07-26 16:41:37.785832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.193 qpair failed and we were unable to recover it. 00:36:18.193 [2024-07-26 16:41:37.786015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.193 [2024-07-26 16:41:37.786050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.193 qpair failed and we were unable to recover it. 00:36:18.193 [2024-07-26 16:41:37.786255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.194 [2024-07-26 16:41:37.786288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.194 qpair failed and we were unable to recover it. 00:36:18.194 [2024-07-26 16:41:37.786518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.194 [2024-07-26 16:41:37.786554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.194 qpair failed and we were unable to recover it. 00:36:18.194 [2024-07-26 16:41:37.786782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.194 [2024-07-26 16:41:37.786831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.194 qpair failed and we were unable to recover it. 
00:36:18.194 [2024-07-26 16:41:37.787046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.194 [2024-07-26 16:41:37.787106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.194 qpair failed and we were unable to recover it. 00:36:18.194 [2024-07-26 16:41:37.787292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.194 [2024-07-26 16:41:37.787325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.194 qpair failed and we were unable to recover it. 00:36:18.194 [2024-07-26 16:41:37.787556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.194 [2024-07-26 16:41:37.787592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.194 qpair failed and we were unable to recover it. 00:36:18.194 [2024-07-26 16:41:37.787789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.194 [2024-07-26 16:41:37.787822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.194 qpair failed and we were unable to recover it. 00:36:18.194 [2024-07-26 16:41:37.787974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.194 [2024-07-26 16:41:37.788006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.194 qpair failed and we were unable to recover it. 00:36:18.194 [2024-07-26 16:41:37.788154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.194 [2024-07-26 16:41:37.788186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.194 qpair failed and we were unable to recover it. 00:36:18.194 [2024-07-26 16:41:37.788362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.194 [2024-07-26 16:41:37.788395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.194 qpair failed and we were unable to recover it. 00:36:18.194 [2024-07-26 16:41:37.788625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.194 [2024-07-26 16:41:37.788661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.194 qpair failed and we were unable to recover it. 00:36:18.194 [2024-07-26 16:41:37.788858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.194 [2024-07-26 16:41:37.788893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.194 qpair failed and we were unable to recover it. 00:36:18.194 [2024-07-26 16:41:37.789148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.194 [2024-07-26 16:41:37.789182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.194 qpair failed and we were unable to recover it. 
00:36:18.194 [2024-07-26 16:41:37.789415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.194 [2024-07-26 16:41:37.789452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.194 qpair failed and we were unable to recover it. 00:36:18.194 [2024-07-26 16:41:37.789635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.194 [2024-07-26 16:41:37.789671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.194 qpair failed and we were unable to recover it. 00:36:18.194 [2024-07-26 16:41:37.789840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.194 [2024-07-26 16:41:37.789872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.194 qpair failed and we were unable to recover it. 00:36:18.194 [2024-07-26 16:41:37.790099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.194 [2024-07-26 16:41:37.790136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.194 qpair failed and we were unable to recover it. 00:36:18.194 [2024-07-26 16:41:37.790363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.194 [2024-07-26 16:41:37.790398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.194 qpair failed and we were unable to recover it. 00:36:18.194 [2024-07-26 16:41:37.790568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.194 [2024-07-26 16:41:37.790600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.194 qpair failed and we were unable to recover it. 00:36:18.194 [2024-07-26 16:41:37.790799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.194 [2024-07-26 16:41:37.790836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.194 qpair failed and we were unable to recover it. 00:36:18.194 [2024-07-26 16:41:37.790992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.194 [2024-07-26 16:41:37.791028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.194 qpair failed and we were unable to recover it. 00:36:18.194 [2024-07-26 16:41:37.791233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.194 [2024-07-26 16:41:37.791266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.194 qpair failed and we were unable to recover it. 00:36:18.194 [2024-07-26 16:41:37.791413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.194 [2024-07-26 16:41:37.791446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.194 qpair failed and we were unable to recover it. 
00:36:18.194 [2024-07-26 16:41:37.791676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.194 [2024-07-26 16:41:37.791712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.194 qpair failed and we were unable to recover it. 00:36:18.194 [2024-07-26 16:41:37.791935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.194 [2024-07-26 16:41:37.791967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.194 qpair failed and we were unable to recover it. 00:36:18.194 [2024-07-26 16:41:37.792150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.194 [2024-07-26 16:41:37.792187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.194 qpair failed and we were unable to recover it. 00:36:18.194 [2024-07-26 16:41:37.792354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.194 [2024-07-26 16:41:37.792389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.194 qpair failed and we were unable to recover it. 00:36:18.194 [2024-07-26 16:41:37.792574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.194 [2024-07-26 16:41:37.792607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.194 qpair failed and we were unable to recover it. 00:36:18.194 [2024-07-26 16:41:37.792770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.194 [2024-07-26 16:41:37.792806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.194 qpair failed and we were unable to recover it. 00:36:18.194 [2024-07-26 16:41:37.793002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.194 [2024-07-26 16:41:37.793038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.194 qpair failed and we were unable to recover it. 00:36:18.194 [2024-07-26 16:41:37.793226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.194 [2024-07-26 16:41:37.793258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.194 qpair failed and we were unable to recover it. 00:36:18.194 [2024-07-26 16:41:37.793425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.194 [2024-07-26 16:41:37.793461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.194 qpair failed and we were unable to recover it. 00:36:18.194 [2024-07-26 16:41:37.793656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.194 [2024-07-26 16:41:37.793692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.194 qpair failed and we were unable to recover it. 
00:36:18.194 [2024-07-26 16:41:37.793896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.194 [2024-07-26 16:41:37.793929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.194 qpair failed and we were unable to recover it. 00:36:18.194 [2024-07-26 16:41:37.794133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.194 [2024-07-26 16:41:37.794183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.194 qpair failed and we were unable to recover it. 00:36:18.194 [2024-07-26 16:41:37.794379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.194 [2024-07-26 16:41:37.794415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.194 qpair failed and we were unable to recover it. 00:36:18.194 [2024-07-26 16:41:37.794590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.194 [2024-07-26 16:41:37.794632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.194 qpair failed and we were unable to recover it. 00:36:18.194 [2024-07-26 16:41:37.794811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.194 [2024-07-26 16:41:37.794844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.194 qpair failed and we were unable to recover it. 00:36:18.195 [2024-07-26 16:41:37.795066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.195 [2024-07-26 16:41:37.795117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.195 qpair failed and we were unable to recover it. 00:36:18.195 [2024-07-26 16:41:37.795286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.195 [2024-07-26 16:41:37.795318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.195 qpair failed and we were unable to recover it. 00:36:18.195 [2024-07-26 16:41:37.795498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.195 [2024-07-26 16:41:37.795530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.195 qpair failed and we were unable to recover it. 00:36:18.195 [2024-07-26 16:41:37.795714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.195 [2024-07-26 16:41:37.795750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.195 qpair failed and we were unable to recover it. 00:36:18.195 [2024-07-26 16:41:37.795948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.195 [2024-07-26 16:41:37.795981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.195 qpair failed and we were unable to recover it. 
00:36:18.195 [2024-07-26 16:41:37.796144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.195 [2024-07-26 16:41:37.796178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.195 qpair failed and we were unable to recover it. 00:36:18.195 [2024-07-26 16:41:37.796371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.195 [2024-07-26 16:41:37.796407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.195 qpair failed and we were unable to recover it. 00:36:18.195 [2024-07-26 16:41:37.796609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.195 [2024-07-26 16:41:37.796641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.195 qpair failed and we were unable to recover it. 00:36:18.195 [2024-07-26 16:41:37.796790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.195 [2024-07-26 16:41:37.796822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.195 qpair failed and we were unable to recover it. 00:36:18.195 [2024-07-26 16:41:37.797047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.195 [2024-07-26 16:41:37.797089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.195 qpair failed and we were unable to recover it. 00:36:18.195 [2024-07-26 16:41:37.797266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.195 [2024-07-26 16:41:37.797298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.195 qpair failed and we were unable to recover it. 00:36:18.195 [2024-07-26 16:41:37.797489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.195 [2024-07-26 16:41:37.797525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.195 qpair failed and we were unable to recover it. 00:36:18.195 [2024-07-26 16:41:37.797755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.195 [2024-07-26 16:41:37.797791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.195 qpair failed and we were unable to recover it. 00:36:18.195 [2024-07-26 16:41:37.797991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.195 [2024-07-26 16:41:37.798023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.195 qpair failed and we were unable to recover it. 00:36:18.195 [2024-07-26 16:41:37.798210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.195 [2024-07-26 16:41:37.798243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.195 qpair failed and we were unable to recover it. 
00:36:18.195 [2024-07-26 16:41:37.798392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.195 [2024-07-26 16:41:37.798425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.195 qpair failed and we were unable to recover it. 00:36:18.195 [2024-07-26 16:41:37.798629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.195 [2024-07-26 16:41:37.798662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.195 qpair failed and we were unable to recover it. 00:36:18.195 [2024-07-26 16:41:37.798864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.195 [2024-07-26 16:41:37.798899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.195 qpair failed and we were unable to recover it. 00:36:18.195 [2024-07-26 16:41:37.799071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.195 [2024-07-26 16:41:37.799107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.195 qpair failed and we were unable to recover it. 00:36:18.195 [2024-07-26 16:41:37.799310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.195 [2024-07-26 16:41:37.799343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.195 qpair failed and we were unable to recover it. 00:36:18.195 [2024-07-26 16:41:37.799545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.195 [2024-07-26 16:41:37.799604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.195 qpair failed and we were unable to recover it. 00:36:18.195 [2024-07-26 16:41:37.799805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.195 [2024-07-26 16:41:37.799841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.195 qpair failed and we were unable to recover it. 00:36:18.195 [2024-07-26 16:41:37.800038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.195 [2024-07-26 16:41:37.800078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.195 qpair failed and we were unable to recover it. 00:36:18.195 [2024-07-26 16:41:37.800287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.195 [2024-07-26 16:41:37.800324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.195 qpair failed and we were unable to recover it. 00:36:18.195 [2024-07-26 16:41:37.800492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.195 [2024-07-26 16:41:37.800528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.195 qpair failed and we were unable to recover it. 
00:36:18.195 [2024-07-26 16:41:37.800740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.195 [2024-07-26 16:41:37.800772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.195 qpair failed and we were unable to recover it. 00:36:18.195 [2024-07-26 16:41:37.800970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.195 [2024-07-26 16:41:37.801006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.195 qpair failed and we were unable to recover it. 00:36:18.195 [2024-07-26 16:41:37.801219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.195 [2024-07-26 16:41:37.801252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.195 qpair failed and we were unable to recover it. 00:36:18.195 [2024-07-26 16:41:37.801424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.195 [2024-07-26 16:41:37.801456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.195 qpair failed and we were unable to recover it. 00:36:18.195 [2024-07-26 16:41:37.801672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.195 [2024-07-26 16:41:37.801708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.195 qpair failed and we were unable to recover it. 00:36:18.195 [2024-07-26 16:41:37.801874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.195 [2024-07-26 16:41:37.801910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.195 qpair failed and we were unable to recover it. 00:36:18.195 [2024-07-26 16:41:37.802102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.195 [2024-07-26 16:41:37.802153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.195 qpair failed and we were unable to recover it. 00:36:18.195 [2024-07-26 16:41:37.802368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.195 [2024-07-26 16:41:37.802404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.195 qpair failed and we were unable to recover it. 00:36:18.195 [2024-07-26 16:41:37.802606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.195 [2024-07-26 16:41:37.802638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.195 qpair failed and we were unable to recover it. 00:36:18.195 [2024-07-26 16:41:37.802788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.195 [2024-07-26 16:41:37.802820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.195 qpair failed and we were unable to recover it. 
00:36:18.195 [2024-07-26 16:41:37.803018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.195 [2024-07-26 16:41:37.803054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.195 qpair failed and we were unable to recover it. 00:36:18.195 [2024-07-26 16:41:37.803265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.195 [2024-07-26 16:41:37.803298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.195 qpair failed and we were unable to recover it. 00:36:18.195 [2024-07-26 16:41:37.803468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.196 [2024-07-26 16:41:37.803500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.196 qpair failed and we were unable to recover it. 00:36:18.196 [2024-07-26 16:41:37.803710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.196 [2024-07-26 16:41:37.803751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.196 qpair failed and we were unable to recover it. 00:36:18.196 [2024-07-26 16:41:37.803968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.196 [2024-07-26 16:41:37.804003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.196 qpair failed and we were unable to recover it. 00:36:18.196 [2024-07-26 16:41:37.804225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.196 [2024-07-26 16:41:37.804258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.196 qpair failed and we were unable to recover it. 00:36:18.196 [2024-07-26 16:41:37.804460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.196 [2024-07-26 16:41:37.804496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.196 qpair failed and we were unable to recover it. 00:36:18.196 [2024-07-26 16:41:37.804655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.196 [2024-07-26 16:41:37.804690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.196 qpair failed and we were unable to recover it. 00:36:18.196 [2024-07-26 16:41:37.804868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.196 [2024-07-26 16:41:37.804900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.196 qpair failed and we were unable to recover it. 00:36:18.196 [2024-07-26 16:41:37.805083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.196 [2024-07-26 16:41:37.805116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.196 qpair failed and we were unable to recover it. 
00:36:18.196 [2024-07-26 16:41:37.805348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.196 [2024-07-26 16:41:37.805384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.196 qpair failed and we were unable to recover it. 00:36:18.196 [2024-07-26 16:41:37.805556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.196 [2024-07-26 16:41:37.805588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.196 qpair failed and we were unable to recover it. 00:36:18.196 [2024-07-26 16:41:37.805808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.196 [2024-07-26 16:41:37.805844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.196 qpair failed and we were unable to recover it. 00:36:18.196 [2024-07-26 16:41:37.806027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.196 [2024-07-26 16:41:37.806066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.196 qpair failed and we were unable to recover it. 00:36:18.196 [2024-07-26 16:41:37.806279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.196 [2024-07-26 16:41:37.806311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.196 qpair failed and we were unable to recover it. 00:36:18.196 [2024-07-26 16:41:37.806519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.196 [2024-07-26 16:41:37.806551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.196 qpair failed and we were unable to recover it. 00:36:18.196 [2024-07-26 16:41:37.806718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.196 [2024-07-26 16:41:37.806750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.196 qpair failed and we were unable to recover it. 00:36:18.196 [2024-07-26 16:41:37.806940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.196 [2024-07-26 16:41:37.806973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.196 qpair failed and we were unable to recover it. 00:36:18.196 [2024-07-26 16:41:37.807155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.196 [2024-07-26 16:41:37.807188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.196 qpair failed and we were unable to recover it. 00:36:18.196 [2024-07-26 16:41:37.807356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.196 [2024-07-26 16:41:37.807407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.196 qpair failed and we were unable to recover it. 
00:36:18.196 [2024-07-26 16:41:37.807606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.196 [2024-07-26 16:41:37.807639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.196 qpair failed and we were unable to recover it. 00:36:18.196 [2024-07-26 16:41:37.807785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.196 [2024-07-26 16:41:37.807817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.196 qpair failed and we were unable to recover it. 00:36:18.196 [2024-07-26 16:41:37.807963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.196 [2024-07-26 16:41:37.808011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.196 qpair failed and we were unable to recover it. 00:36:18.196 [2024-07-26 16:41:37.808208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.196 [2024-07-26 16:41:37.808241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.196 qpair failed and we were unable to recover it. 00:36:18.196 [2024-07-26 16:41:37.808403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.196 [2024-07-26 16:41:37.808435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.196 qpair failed and we were unable to recover it. 00:36:18.196 [2024-07-26 16:41:37.808574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.196 [2024-07-26 16:41:37.808606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.196 qpair failed and we were unable to recover it. 00:36:18.196 [2024-07-26 16:41:37.808811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.196 [2024-07-26 16:41:37.808843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.196 qpair failed and we were unable to recover it. 00:36:18.196 [2024-07-26 16:41:37.809045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.196 [2024-07-26 16:41:37.809088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.196 qpair failed and we were unable to recover it. 00:36:18.196 [2024-07-26 16:41:37.809280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.196 [2024-07-26 16:41:37.809312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.196 qpair failed and we were unable to recover it. 00:36:18.196 [2024-07-26 16:41:37.809515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.196 [2024-07-26 16:41:37.809547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.196 qpair failed and we were unable to recover it. 
00:36:18.196 [2024-07-26 16:41:37.809773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.196 [2024-07-26 16:41:37.809809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.196 qpair failed and we were unable to recover it. 00:36:18.196 [2024-07-26 16:41:37.809995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.196 [2024-07-26 16:41:37.810030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.196 qpair failed and we were unable to recover it. 00:36:18.196 [2024-07-26 16:41:37.810209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.196 [2024-07-26 16:41:37.810242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.196 qpair failed and we were unable to recover it. 00:36:18.196 [2024-07-26 16:41:37.810445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.196 [2024-07-26 16:41:37.810481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.196 qpair failed and we were unable to recover it. 00:36:18.196 [2024-07-26 16:41:37.810666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.196 [2024-07-26 16:41:37.810701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.196 qpair failed and we were unable to recover it. 00:36:18.196 [2024-07-26 16:41:37.810895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.196 [2024-07-26 16:41:37.810928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.196 qpair failed and we were unable to recover it. 00:36:18.196 [2024-07-26 16:41:37.811130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.196 [2024-07-26 16:41:37.811181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.196 qpair failed and we were unable to recover it. 00:36:18.196 [2024-07-26 16:41:37.811400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.196 [2024-07-26 16:41:37.811435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.196 qpair failed and we were unable to recover it. 00:36:18.196 [2024-07-26 16:41:37.811602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.196 [2024-07-26 16:41:37.811634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.196 qpair failed and we were unable to recover it. 00:36:18.196 [2024-07-26 16:41:37.811857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.197 [2024-07-26 16:41:37.811893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.197 qpair failed and we were unable to recover it. 
00:36:18.197 [2024-07-26 16:41:37.812083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.197 [2024-07-26 16:41:37.812119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.197 qpair failed and we were unable to recover it. 00:36:18.197 [2024-07-26 16:41:37.812308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.197 [2024-07-26 16:41:37.812339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.197 qpair failed and we were unable to recover it. 00:36:18.197 [2024-07-26 16:41:37.812518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.197 [2024-07-26 16:41:37.812555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.197 qpair failed and we were unable to recover it. 00:36:18.197 [2024-07-26 16:41:37.812774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.197 [2024-07-26 16:41:37.812814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.197 qpair failed and we were unable to recover it. 00:36:18.197 [2024-07-26 16:41:37.813015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.197 [2024-07-26 16:41:37.813048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.197 qpair failed and we were unable to recover it. 00:36:18.197 [2024-07-26 16:41:37.813234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.197 [2024-07-26 16:41:37.813267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.197 qpair failed and we were unable to recover it. 00:36:18.197 [2024-07-26 16:41:37.813457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.197 [2024-07-26 16:41:37.813492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.197 qpair failed and we were unable to recover it. 00:36:18.197 [2024-07-26 16:41:37.813675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.197 [2024-07-26 16:41:37.813707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.197 qpair failed and we were unable to recover it. 00:36:18.197 [2024-07-26 16:41:37.813908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.197 [2024-07-26 16:41:37.813944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.197 qpair failed and we were unable to recover it. 00:36:18.197 [2024-07-26 16:41:37.814145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.197 [2024-07-26 16:41:37.814187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.197 qpair failed and we were unable to recover it. 
00:36:18.197 [2024-07-26 16:41:37.814336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.197 [2024-07-26 16:41:37.814369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.197 qpair failed and we were unable to recover it. 00:36:18.197 [2024-07-26 16:41:37.814593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.197 [2024-07-26 16:41:37.814629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.197 qpair failed and we were unable to recover it. 00:36:18.197 [2024-07-26 16:41:37.814836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.197 [2024-07-26 16:41:37.814872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.197 qpair failed and we were unable to recover it. 00:36:18.197 [2024-07-26 16:41:37.815101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.197 [2024-07-26 16:41:37.815134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.197 qpair failed and we were unable to recover it. 00:36:18.197 [2024-07-26 16:41:37.815339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.197 [2024-07-26 16:41:37.815375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.197 qpair failed and we were unable to recover it. 00:36:18.197 [2024-07-26 16:41:37.815565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.197 [2024-07-26 16:41:37.815601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.197 qpair failed and we were unable to recover it. 00:36:18.197 [2024-07-26 16:41:37.815808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.197 [2024-07-26 16:41:37.815840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.197 qpair failed and we were unable to recover it. 00:36:18.197 [2024-07-26 16:41:37.816043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.197 [2024-07-26 16:41:37.816082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.197 qpair failed and we were unable to recover it. 00:36:18.197 [2024-07-26 16:41:37.816250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.197 [2024-07-26 16:41:37.816282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.197 qpair failed and we were unable to recover it. 00:36:18.197 [2024-07-26 16:41:37.816457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.197 [2024-07-26 16:41:37.816489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.197 qpair failed and we were unable to recover it. 
00:36:18.197 [2024-07-26 16:41:37.816683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.197 [2024-07-26 16:41:37.816718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.197 qpair failed and we were unable to recover it. 00:36:18.197 [2024-07-26 16:41:37.816879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.197 [2024-07-26 16:41:37.816914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.197 qpair failed and we were unable to recover it. 00:36:18.197 [2024-07-26 16:41:37.817093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.197 [2024-07-26 16:41:37.817126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.197 qpair failed and we were unable to recover it. 00:36:18.197 [2024-07-26 16:41:37.817297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.197 [2024-07-26 16:41:37.817329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.197 qpair failed and we were unable to recover it. 00:36:18.197 [2024-07-26 16:41:37.817561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.197 [2024-07-26 16:41:37.817597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.197 qpair failed and we were unable to recover it. 00:36:18.197 [2024-07-26 16:41:37.817775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.197 [2024-07-26 16:41:37.817808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.197 qpair failed and we were unable to recover it. 00:36:18.197 [2024-07-26 16:41:37.818009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.197 [2024-07-26 16:41:37.818057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.197 qpair failed and we were unable to recover it. 00:36:18.197 [2024-07-26 16:41:37.818272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.197 [2024-07-26 16:41:37.818305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.197 qpair failed and we were unable to recover it. 00:36:18.197 [2024-07-26 16:41:37.818485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.197 [2024-07-26 16:41:37.818517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.197 qpair failed and we were unable to recover it. 00:36:18.197 [2024-07-26 16:41:37.818715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.197 [2024-07-26 16:41:37.818751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.197 qpair failed and we were unable to recover it. 
00:36:18.197 [2024-07-26 16:41:37.818975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.197 [2024-07-26 16:41:37.819012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.197 qpair failed and we were unable to recover it. 00:36:18.197 [2024-07-26 16:41:37.819202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.197 [2024-07-26 16:41:37.819235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.197 qpair failed and we were unable to recover it. 00:36:18.197 [2024-07-26 16:41:37.819417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.197 [2024-07-26 16:41:37.819449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.197 qpair failed and we were unable to recover it. 00:36:18.197 [2024-07-26 16:41:37.819623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.197 [2024-07-26 16:41:37.819655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.198 qpair failed and we were unable to recover it. 00:36:18.198 [2024-07-26 16:41:37.819829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.198 [2024-07-26 16:41:37.819861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.198 qpair failed and we were unable to recover it. 00:36:18.198 [2024-07-26 16:41:37.820092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.198 [2024-07-26 16:41:37.820128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.198 qpair failed and we were unable to recover it. 00:36:18.198 [2024-07-26 16:41:37.820333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.198 [2024-07-26 16:41:37.820365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.198 qpair failed and we were unable to recover it. 00:36:18.198 [2024-07-26 16:41:37.820513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.198 [2024-07-26 16:41:37.820545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.198 qpair failed and we were unable to recover it. 00:36:18.198 [2024-07-26 16:41:37.820722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.198 [2024-07-26 16:41:37.820754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.198 qpair failed and we were unable to recover it. 00:36:18.198 [2024-07-26 16:41:37.820960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.198 [2024-07-26 16:41:37.820995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.198 qpair failed and we were unable to recover it. 
00:36:18.198 [2024-07-26 16:41:37.821184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.198 [2024-07-26 16:41:37.821216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.198 qpair failed and we were unable to recover it. 00:36:18.198 [2024-07-26 16:41:37.821417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.198 [2024-07-26 16:41:37.821453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.198 qpair failed and we were unable to recover it. 00:36:18.198 [2024-07-26 16:41:37.821652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.198 [2024-07-26 16:41:37.821688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.198 qpair failed and we were unable to recover it. 00:36:18.198 [2024-07-26 16:41:37.821883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.198 [2024-07-26 16:41:37.821920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.198 qpair failed and we were unable to recover it. 00:36:18.198 [2024-07-26 16:41:37.822143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.198 [2024-07-26 16:41:37.822180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.198 qpair failed and we were unable to recover it. 00:36:18.198 [2024-07-26 16:41:37.822409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.198 [2024-07-26 16:41:37.822442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.198 qpair failed and we were unable to recover it. 00:36:18.198 [2024-07-26 16:41:37.822612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.198 [2024-07-26 16:41:37.822645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.198 qpair failed and we were unable to recover it. 00:36:18.198 [2024-07-26 16:41:37.822845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.198 [2024-07-26 16:41:37.822880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.198 qpair failed and we were unable to recover it. 00:36:18.198 [2024-07-26 16:41:37.823106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.198 [2024-07-26 16:41:37.823143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.198 qpair failed and we were unable to recover it. 00:36:18.198 [2024-07-26 16:41:37.823333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.198 [2024-07-26 16:41:37.823365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.198 qpair failed and we were unable to recover it. 
00:36:18.198 [2024-07-26 16:41:37.823558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.198 [2024-07-26 16:41:37.823600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.198 qpair failed and we were unable to recover it. 00:36:18.198 [2024-07-26 16:41:37.823802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.198 [2024-07-26 16:41:37.823837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.198 qpair failed and we were unable to recover it. 00:36:18.198 [2024-07-26 16:41:37.824014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.198 [2024-07-26 16:41:37.824049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.198 qpair failed and we were unable to recover it. 00:36:18.198 [2024-07-26 16:41:37.824257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.198 [2024-07-26 16:41:37.824290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.198 qpair failed and we were unable to recover it. 00:36:18.198 [2024-07-26 16:41:37.824494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.198 [2024-07-26 16:41:37.824530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.198 qpair failed and we were unable to recover it. 00:36:18.198 [2024-07-26 16:41:37.824726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.198 [2024-07-26 16:41:37.824758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.198 qpair failed and we were unable to recover it. 00:36:18.198 [2024-07-26 16:41:37.824921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.198 [2024-07-26 16:41:37.824957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.198 qpair failed and we were unable to recover it. 00:36:18.198 [2024-07-26 16:41:37.825141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.198 [2024-07-26 16:41:37.825174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.198 qpair failed and we were unable to recover it. 00:36:18.198 [2024-07-26 16:41:37.825373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.198 [2024-07-26 16:41:37.825405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.198 qpair failed and we were unable to recover it. 00:36:18.198 [2024-07-26 16:41:37.825608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.198 [2024-07-26 16:41:37.825643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.198 qpair failed and we were unable to recover it. 
00:36:18.198 [2024-07-26 16:41:37.825864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.198 [2024-07-26 16:41:37.825900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.198 qpair failed and we were unable to recover it. 00:36:18.198 [2024-07-26 16:41:37.826097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.198 [2024-07-26 16:41:37.826129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.198 qpair failed and we were unable to recover it. 00:36:18.198 [2024-07-26 16:41:37.826325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.198 [2024-07-26 16:41:37.826361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.198 qpair failed and we were unable to recover it. 00:36:18.198 [2024-07-26 16:41:37.826533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.198 [2024-07-26 16:41:37.826569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.198 qpair failed and we were unable to recover it. 00:36:18.198 [2024-07-26 16:41:37.826774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.198 [2024-07-26 16:41:37.826806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.198 qpair failed and we were unable to recover it. 00:36:18.198 [2024-07-26 16:41:37.827024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.198 [2024-07-26 16:41:37.827067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.198 qpair failed and we were unable to recover it. 00:36:18.198 [2024-07-26 16:41:37.827263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.198 [2024-07-26 16:41:37.827295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.198 qpair failed and we were unable to recover it. 00:36:18.198 [2024-07-26 16:41:37.827497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.198 [2024-07-26 16:41:37.827529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.198 qpair failed and we were unable to recover it. 00:36:18.198 [2024-07-26 16:41:37.827741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.198 [2024-07-26 16:41:37.827774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.198 qpair failed and we were unable to recover it. 00:36:18.198 [2024-07-26 16:41:37.827949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.199 [2024-07-26 16:41:37.827986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.199 qpair failed and we were unable to recover it. 
00:36:18.199 [2024-07-26 16:41:37.828212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.199 [2024-07-26 16:41:37.828245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.199 qpair failed and we were unable to recover it. 00:36:18.199 [2024-07-26 16:41:37.828397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.199 [2024-07-26 16:41:37.828430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.199 qpair failed and we were unable to recover it. 00:36:18.199 [2024-07-26 16:41:37.828649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.199 [2024-07-26 16:41:37.828684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.199 qpair failed and we were unable to recover it. 00:36:18.199 [2024-07-26 16:41:37.828886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.199 [2024-07-26 16:41:37.828929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.199 qpair failed and we were unable to recover it. 00:36:18.199 [2024-07-26 16:41:37.829155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.199 [2024-07-26 16:41:37.829192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.199 qpair failed and we were unable to recover it. 00:36:18.199 [2024-07-26 16:41:37.829356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.199 [2024-07-26 16:41:37.829392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.199 qpair failed and we were unable to recover it. 00:36:18.199 [2024-07-26 16:41:37.829623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.199 [2024-07-26 16:41:37.829655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.199 qpair failed and we were unable to recover it. 00:36:18.199 [2024-07-26 16:41:37.829866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.199 [2024-07-26 16:41:37.829902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.199 qpair failed and we were unable to recover it. 00:36:18.199 [2024-07-26 16:41:37.830084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.199 [2024-07-26 16:41:37.830121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.199 qpair failed and we were unable to recover it. 00:36:18.199 [2024-07-26 16:41:37.830289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.199 [2024-07-26 16:41:37.830321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.199 qpair failed and we were unable to recover it. 
00:36:18.199 [2024-07-26 16:41:37.830503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.199 [2024-07-26 16:41:37.830535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.199 qpair failed and we were unable to recover it. 00:36:18.199 [2024-07-26 16:41:37.830730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.199 [2024-07-26 16:41:37.830765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.199 qpair failed and we were unable to recover it. 00:36:18.199 [2024-07-26 16:41:37.830992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.199 [2024-07-26 16:41:37.831024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.199 qpair failed and we were unable to recover it. 00:36:18.199 [2024-07-26 16:41:37.831180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.199 [2024-07-26 16:41:37.831219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.199 qpair failed and we were unable to recover it. 00:36:18.199 [2024-07-26 16:41:37.831419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.199 [2024-07-26 16:41:37.831455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.199 qpair failed and we were unable to recover it. 00:36:18.199 [2024-07-26 16:41:37.831645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.199 [2024-07-26 16:41:37.831677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.199 qpair failed and we were unable to recover it. 00:36:18.199 [2024-07-26 16:41:37.831873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.199 [2024-07-26 16:41:37.831909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.199 qpair failed and we were unable to recover it. 00:36:18.199 [2024-07-26 16:41:37.832111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.199 [2024-07-26 16:41:37.832144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.199 qpair failed and we were unable to recover it. 00:36:18.199 [2024-07-26 16:41:37.832314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.199 [2024-07-26 16:41:37.832347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.199 qpair failed and we were unable to recover it. 00:36:18.199 [2024-07-26 16:41:37.832499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.199 [2024-07-26 16:41:37.832550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.199 qpair failed and we were unable to recover it. 
00:36:18.199 [2024-07-26 16:41:37.832737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.199 [2024-07-26 16:41:37.832773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.199 qpair failed and we were unable to recover it. 00:36:18.199 [2024-07-26 16:41:37.832974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.199 [2024-07-26 16:41:37.833007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.199 qpair failed and we were unable to recover it. 00:36:18.199 [2024-07-26 16:41:37.833215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.199 [2024-07-26 16:41:37.833248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.199 qpair failed and we were unable to recover it. 00:36:18.199 [2024-07-26 16:41:37.833475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.199 [2024-07-26 16:41:37.833510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.199 qpair failed and we were unable to recover it. 00:36:18.199 [2024-07-26 16:41:37.833730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.199 [2024-07-26 16:41:37.833762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.199 qpair failed and we were unable to recover it. 00:36:18.199 [2024-07-26 16:41:37.833964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.199 [2024-07-26 16:41:37.833999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.199 qpair failed and we were unable to recover it. 00:36:18.199 [2024-07-26 16:41:37.834217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.199 [2024-07-26 16:41:37.834250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.199 qpair failed and we were unable to recover it. 00:36:18.199 [2024-07-26 16:41:37.834434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.199 [2024-07-26 16:41:37.834467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.199 qpair failed and we were unable to recover it. 00:36:18.199 [2024-07-26 16:41:37.834701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.199 [2024-07-26 16:41:37.834737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.199 qpair failed and we were unable to recover it. 00:36:18.199 [2024-07-26 16:41:37.834937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.199 [2024-07-26 16:41:37.834969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.199 qpair failed and we were unable to recover it. 
00:36:18.199 [2024-07-26 16:41:37.835151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.199 [2024-07-26 16:41:37.835183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:18.199 qpair failed and we were unable to recover it.
[The same three-line sequence repeats, with only the timestamps changing, for every subsequent reconnect attempt from 16:41:37.835423 through 16:41:37.872280.]
[The connect() failed, errno = 111 and qpair failed sequence continues to repeat from 16:41:37.872436 through 16:41:37.880448, interleaved with the following console output.]
00:36:18.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 822703 Killed "${NVMF_APP[@]}" "$@"
00:36:18.204 16:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:36:18.204 16:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:36:18.204 16:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:36:18.204 16:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:36:18.204 16:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:18.205 16:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=823378
00:36:18.205 [2024-07-26 16:41:37.880647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.205 16:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:36:18.205 [2024-07-26 16:41:37.880681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:18.205 qpair failed and we were unable to recover it.
00:36:18.205 [2024-07-26 16:41:37.880860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.205 16:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 823378
00:36:18.205 [2024-07-26 16:41:37.880893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:18.205 qpair failed and we were unable to recover it.
00:36:18.205 [2024-07-26 16:41:37.881071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.205 [2024-07-26 16:41:37.881104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:18.205 qpair failed and we were unable to recover it.
00:36:18.205 16:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 823378 ']'
00:36:18.205 [2024-07-26 16:41:37.881340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.205 [2024-07-26 16:41:37.881373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:18.205 qpair failed and we were unable to recover it.
00:36:18.205 16:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:18.205 16:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:36:18.205 [2024-07-26 16:41:37.881572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.205 [2024-07-26 16:41:37.881610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:18.205 qpair failed and we were unable to recover it.
00:36:18.205 16:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:18.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:18.205 [2024-07-26 16:41:37.881801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.205 16:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:36:18.205 [2024-07-26 16:41:37.881837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:18.205 qpair failed and we were unable to recover it.
00:36:18.205 16:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:18.205 [2024-07-26 16:41:37.882032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.205 [2024-07-26 16:41:37.882073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:18.205 qpair failed and we were unable to recover it.
00:36:18.205 [2024-07-26 16:41:37.882253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.205 [2024-07-26 16:41:37.882285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:18.205 qpair failed and we were unable to recover it.
00:36:18.205 [2024-07-26 16:41:37.882512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.205 [2024-07-26 16:41:37.882548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:18.205 qpair failed and we were unable to recover it.
00:36:18.205 [2024-07-26 16:41:37.882774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.205 [2024-07-26 16:41:37.882806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:18.205 qpair failed and we were unable to recover it.
00:36:18.205 [2024-07-26 16:41:37.883039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.205 [2024-07-26 16:41:37.883086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:18.205 qpair failed and we were unable to recover it.
00:36:18.205 [2024-07-26 16:41:37.883292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.205 [2024-07-26 16:41:37.883324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:18.205 qpair failed and we were unable to recover it.
00:36:18.205 [2024-07-26 16:41:37.883509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.205 [2024-07-26 16:41:37.883541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:18.205 qpair failed and we were unable to recover it.
00:36:18.205 [2024-07-26 16:41:37.883735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.205 [2024-07-26 16:41:37.883771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:18.205 qpair failed and we were unable to recover it.
00:36:18.205 [2024-07-26 16:41:37.883973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.205 [2024-07-26 16:41:37.884009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:18.205 qpair failed and we were unable to recover it.
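The xtrace lines interleaved above show nvmf/common.sh recording the target PID (nvmfpid=823378), launching nvmf_tgt inside the cvl_0_0_ns_spdk network namespace with -i 0 -e 0xFFFF -m 0xF0, and then calling waitforlisten, which waits for the RPC socket at rpc_addr=/var/tmp/spdk.sock with max_retries=100. A rough sketch of that start-and-wait sequence, simplified from what the harness actually does in autotest_common.sh:

    # Sketch only: the binary path, namespace name, flags, retry count and RPC
    # socket path are copied from the trace above; the polling loop is a
    # simplification of waitforlisten, not its real implementation.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    for _ in $(seq 1 100); do                    # max_retries=100
        [ -S /var/tmp/spdk.sock ] && break       # rpc_addr=/var/tmp/spdk.sock
        sleep 0.5
    done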
00:36:18.209 [2024-07-26 16:41:37.909758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.209 [2024-07-26 16:41:37.909790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:18.209 qpair failed and we were unable to recover it.
00:36:18.209 [2024-07-26 16:41:37.909965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.209 [2024-07-26 16:41:37.909998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:18.209 qpair failed and we were unable to recover it.
00:36:18.209 [2024-07-26 16:41:37.910191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.209 [2024-07-26 16:41:37.910224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:18.209 qpair failed and we were unable to recover it.
00:36:18.209 [2024-07-26 16:41:37.910398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.209 [2024-07-26 16:41:37.910430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:18.209 qpair failed and we were unable to recover it.
00:36:18.209 [2024-07-26 16:41:37.910640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.209 [2024-07-26 16:41:37.910672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:18.209 qpair failed and we were unable to recover it.
00:36:18.209 [2024-07-26 16:41:37.910842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.209 [2024-07-26 16:41:37.910874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:18.209 qpair failed and we were unable to recover it.
00:36:18.209 [2024-07-26 16:41:37.911079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.209 [2024-07-26 16:41:37.911126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:36:18.209 qpair failed and we were unable to recover it.
00:36:18.209 [2024-07-26 16:41:37.911326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.209 [2024-07-26 16:41:37.911363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:36:18.209 qpair failed and we were unable to recover it.
00:36:18.209 [2024-07-26 16:41:37.911568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.209 [2024-07-26 16:41:37.911623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:36:18.209 qpair failed and we were unable to recover it.
00:36:18.209 [2024-07-26 16:41:37.911834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.209 [2024-07-26 16:41:37.911890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:36:18.209 qpair failed and we were unable to recover it.
00:36:18.209 [2024-07-26 16:41:37.912131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.209 [2024-07-26 16:41:37.912167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:36:18.209 qpair failed and we were unable to recover it.
00:36:18.209 [2024-07-26 16:41:37.912384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.209 [2024-07-26 16:41:37.912418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:36:18.209 qpair failed and we were unable to recover it.
00:36:18.209 [2024-07-26 16:41:37.912593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.209 [2024-07-26 16:41:37.912628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:36:18.209 qpair failed and we were unable to recover it.
00:36:18.209 [2024-07-26 16:41:37.912832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.209 [2024-07-26 16:41:37.912885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:36:18.209 qpair failed and we were unable to recover it.
00:36:18.209 [2024-07-26 16:41:37.913066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.209 [2024-07-26 16:41:37.913107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:36:18.209 qpair failed and we were unable to recover it.
00:36:18.209 [2024-07-26 16:41:37.913308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.209 [2024-07-26 16:41:37.913341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:18.209 qpair failed and we were unable to recover it.
00:36:18.209 [2024-07-26 16:41:37.913522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.209 [2024-07-26 16:41:37.913554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:18.209 qpair failed and we were unable to recover it.
00:36:18.209 [2024-07-26 16:41:37.913747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.209 [2024-07-26 16:41:37.913782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:18.209 qpair failed and we were unable to recover it.
00:36:18.209 [2024-07-26 16:41:37.913984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.209 [2024-07-26 16:41:37.914019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:18.209 qpair failed and we were unable to recover it.
00:36:18.209 [2024-07-26 16:41:37.914233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.209 [2024-07-26 16:41:37.914270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:18.209 qpair failed and we were unable to recover it.
00:36:18.209 [2024-07-26 16:41:37.914441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.209 [2024-07-26 16:41:37.914477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.209 qpair failed and we were unable to recover it. 00:36:18.209 [2024-07-26 16:41:37.914661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.209 [2024-07-26 16:41:37.914697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.209 qpair failed and we were unable to recover it. 00:36:18.209 [2024-07-26 16:41:37.914866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.209 [2024-07-26 16:41:37.914902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.209 qpair failed and we were unable to recover it. 00:36:18.209 [2024-07-26 16:41:37.915132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.209 [2024-07-26 16:41:37.915165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.209 qpair failed and we were unable to recover it. 00:36:18.209 [2024-07-26 16:41:37.915318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.209 [2024-07-26 16:41:37.915350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.209 qpair failed and we were unable to recover it. 00:36:18.209 [2024-07-26 16:41:37.915525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.209 [2024-07-26 16:41:37.915558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.209 qpair failed and we were unable to recover it. 00:36:18.209 [2024-07-26 16:41:37.915736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.209 [2024-07-26 16:41:37.915768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.209 qpair failed and we were unable to recover it. 00:36:18.209 [2024-07-26 16:41:37.915968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.209 [2024-07-26 16:41:37.916004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.209 qpair failed and we were unable to recover it. 00:36:18.209 [2024-07-26 16:41:37.916188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.209 [2024-07-26 16:41:37.916221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.209 qpair failed and we were unable to recover it. 00:36:18.209 [2024-07-26 16:41:37.916382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.209 [2024-07-26 16:41:37.916413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.209 qpair failed and we were unable to recover it. 
00:36:18.209 [2024-07-26 16:41:37.916594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.209 [2024-07-26 16:41:37.916644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.209 qpair failed and we were unable to recover it. 00:36:18.209 [2024-07-26 16:41:37.916864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.209 [2024-07-26 16:41:37.916900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.209 qpair failed and we were unable to recover it. 00:36:18.209 [2024-07-26 16:41:37.917108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.209 [2024-07-26 16:41:37.917141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.209 qpair failed and we were unable to recover it. 00:36:18.209 [2024-07-26 16:41:37.917322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.209 [2024-07-26 16:41:37.917359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.209 qpair failed and we were unable to recover it. 00:36:18.209 [2024-07-26 16:41:37.917559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.209 [2024-07-26 16:41:37.917594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.209 qpair failed and we were unable to recover it. 00:36:18.209 [2024-07-26 16:41:37.917786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.209 [2024-07-26 16:41:37.917821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.210 qpair failed and we were unable to recover it. 00:36:18.210 [2024-07-26 16:41:37.918012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.210 [2024-07-26 16:41:37.918054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.210 qpair failed and we were unable to recover it. 00:36:18.210 [2024-07-26 16:41:37.918239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.210 [2024-07-26 16:41:37.918271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.210 qpair failed and we were unable to recover it. 00:36:18.210 [2024-07-26 16:41:37.918444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.210 [2024-07-26 16:41:37.918476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.210 qpair failed and we were unable to recover it. 00:36:18.210 [2024-07-26 16:41:37.918699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.210 [2024-07-26 16:41:37.918735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.210 qpair failed and we were unable to recover it. 
00:36:18.210 [2024-07-26 16:41:37.918928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.210 [2024-07-26 16:41:37.918963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.210 qpair failed and we were unable to recover it. 00:36:18.210 [2024-07-26 16:41:37.919166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.210 [2024-07-26 16:41:37.919199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.210 qpair failed and we were unable to recover it. 00:36:18.210 [2024-07-26 16:41:37.919375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.210 [2024-07-26 16:41:37.919408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.210 qpair failed and we were unable to recover it. 00:36:18.210 [2024-07-26 16:41:37.919563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.210 [2024-07-26 16:41:37.919613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.210 qpair failed and we were unable to recover it. 00:36:18.210 [2024-07-26 16:41:37.919793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.210 [2024-07-26 16:41:37.919829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.210 qpair failed and we were unable to recover it. 00:36:18.210 [2024-07-26 16:41:37.920005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.210 [2024-07-26 16:41:37.920038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.210 qpair failed and we were unable to recover it. 00:36:18.210 [2024-07-26 16:41:37.920237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.210 [2024-07-26 16:41:37.920269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.210 qpair failed and we were unable to recover it. 00:36:18.210 [2024-07-26 16:41:37.920445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.210 [2024-07-26 16:41:37.920477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.210 qpair failed and we were unable to recover it. 00:36:18.210 [2024-07-26 16:41:37.920645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.210 [2024-07-26 16:41:37.920680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.210 qpair failed and we were unable to recover it. 00:36:18.210 [2024-07-26 16:41:37.920878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.210 [2024-07-26 16:41:37.920914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.210 qpair failed and we were unable to recover it. 
00:36:18.210 [2024-07-26 16:41:37.921083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.210 [2024-07-26 16:41:37.921134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.210 qpair failed and we were unable to recover it. 00:36:18.210 [2024-07-26 16:41:37.921285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.210 [2024-07-26 16:41:37.921317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.210 qpair failed and we were unable to recover it. 00:36:18.210 [2024-07-26 16:41:37.921489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.210 [2024-07-26 16:41:37.921521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.210 qpair failed and we were unable to recover it. 00:36:18.210 [2024-07-26 16:41:37.921698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.210 [2024-07-26 16:41:37.921729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.210 qpair failed and we were unable to recover it. 00:36:18.210 [2024-07-26 16:41:37.921921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.210 [2024-07-26 16:41:37.921956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.210 qpair failed and we were unable to recover it. 00:36:18.210 [2024-07-26 16:41:37.922149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.210 [2024-07-26 16:41:37.922191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.210 qpair failed and we were unable to recover it. 00:36:18.210 [2024-07-26 16:41:37.922393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.210 [2024-07-26 16:41:37.922429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.210 qpair failed and we were unable to recover it. 00:36:18.210 [2024-07-26 16:41:37.922589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.210 [2024-07-26 16:41:37.922624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.210 qpair failed and we were unable to recover it. 00:36:18.210 [2024-07-26 16:41:37.922785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.210 [2024-07-26 16:41:37.922821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.210 qpair failed and we were unable to recover it. 00:36:18.210 [2024-07-26 16:41:37.923014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.210 [2024-07-26 16:41:37.923049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.210 qpair failed and we were unable to recover it. 
00:36:18.210 [2024-07-26 16:41:37.923236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.210 [2024-07-26 16:41:37.923268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.210 qpair failed and we were unable to recover it. 00:36:18.210 [2024-07-26 16:41:37.923417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.210 [2024-07-26 16:41:37.923450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.210 qpair failed and we were unable to recover it. 00:36:18.210 [2024-07-26 16:41:37.923605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.210 [2024-07-26 16:41:37.923637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.210 qpair failed and we were unable to recover it. 00:36:18.210 [2024-07-26 16:41:37.923895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.210 [2024-07-26 16:41:37.923930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.210 qpair failed and we were unable to recover it. 00:36:18.210 [2024-07-26 16:41:37.924142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.210 [2024-07-26 16:41:37.924175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.210 qpair failed and we were unable to recover it. 00:36:18.210 [2024-07-26 16:41:37.924330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.210 [2024-07-26 16:41:37.924362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.210 qpair failed and we were unable to recover it. 00:36:18.210 [2024-07-26 16:41:37.924547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.210 [2024-07-26 16:41:37.924597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.210 qpair failed and we were unable to recover it. 00:36:18.210 [2024-07-26 16:41:37.924794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.210 [2024-07-26 16:41:37.924830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.210 qpair failed and we were unable to recover it. 00:36:18.210 [2024-07-26 16:41:37.925010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.210 [2024-07-26 16:41:37.925045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.210 qpair failed and we were unable to recover it. 00:36:18.210 [2024-07-26 16:41:37.925222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.210 [2024-07-26 16:41:37.925254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.210 qpair failed and we were unable to recover it. 
00:36:18.479 [2024-07-26 16:41:37.925433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.925466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-07-26 16:41:37.925616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.925647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-07-26 16:41:37.925827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.925862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-07-26 16:41:37.926069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.926102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-07-26 16:41:37.926258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.926290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-07-26 16:41:37.926501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.926537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-07-26 16:41:37.926727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.926762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-07-26 16:41:37.926952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.926988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-07-26 16:41:37.927201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.927266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-07-26 16:41:37.927495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.927557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 
00:36:18.480 [2024-07-26 16:41:37.927760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.927821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-07-26 16:41:37.928006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.928040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-07-26 16:41:37.928234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.928288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-07-26 16:41:37.928490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.928543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-07-26 16:41:37.928752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.928808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-07-26 16:41:37.928980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.929014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-07-26 16:41:37.929224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.929282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-07-26 16:41:37.929527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.929564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-07-26 16:41:37.929731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.929768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-07-26 16:41:37.929962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.929998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 
00:36:18.480 [2024-07-26 16:41:37.930197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.930233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-07-26 16:41:37.930408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.930444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-07-26 16:41:37.930631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.930666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-07-26 16:41:37.930860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.930895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-07-26 16:41:37.931079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.931130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-07-26 16:41:37.931286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.931318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-07-26 16:41:37.931512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.931570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-07-26 16:41:37.931992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.932048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-07-26 16:41:37.932230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.932293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-07-26 16:41:37.932454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.932489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 
00:36:18.480 [2024-07-26 16:41:37.932707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.932766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-07-26 16:41:37.932978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.933018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-07-26 16:41:37.933196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.933234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-07-26 16:41:37.933430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.933466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-07-26 16:41:37.933622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.933657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-07-26 16:41:37.933895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.933931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-07-26 16:41:37.934111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.934143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-07-26 16:41:37.934292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.934324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-07-26 16:41:37.934523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.934560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-07-26 16:41:37.934731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.934782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 
00:36:18.480 [2024-07-26 16:41:37.934980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.935016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-07-26 16:41:37.935193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.935225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-07-26 16:41:37.935388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.480 [2024-07-26 16:41:37.935422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.480 qpair failed and we were unable to recover it. 00:36:18.480 [2024-07-26 16:41:37.935606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-07-26 16:41:37.935640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-07-26 16:41:37.935843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-07-26 16:41:37.935879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-07-26 16:41:37.936151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-07-26 16:41:37.936194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-07-26 16:41:37.936365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-07-26 16:41:37.936403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-07-26 16:41:37.936603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-07-26 16:41:37.936640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-07-26 16:41:37.936862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-07-26 16:41:37.936898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-07-26 16:41:37.937107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-07-26 16:41:37.937140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 
00:36:18.481 [2024-07-26 16:41:37.937290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-07-26 16:41:37.937322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-07-26 16:41:37.937495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-07-26 16:41:37.937527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-07-26 16:41:37.937700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-07-26 16:41:37.937736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-07-26 16:41:37.937982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-07-26 16:41:37.938018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-07-26 16:41:37.938224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-07-26 16:41:37.938257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-07-26 16:41:37.938434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-07-26 16:41:37.938466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-07-26 16:41:37.938672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-07-26 16:41:37.938712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-07-26 16:41:37.938885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-07-26 16:41:37.938921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-07-26 16:41:37.939131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-07-26 16:41:37.939164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-07-26 16:41:37.939311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-07-26 16:41:37.939343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 
00:36:18.481 [2024-07-26 16:41:37.939516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-07-26 16:41:37.939548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-07-26 16:41:37.939747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-07-26 16:41:37.939783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-07-26 16:41:37.939968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-07-26 16:41:37.940003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-07-26 16:41:37.940192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-07-26 16:41:37.940225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-07-26 16:41:37.940407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-07-26 16:41:37.940440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-07-26 16:41:37.940611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-07-26 16:41:37.940647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-07-26 16:41:37.940865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-07-26 16:41:37.940900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-07-26 16:41:37.941113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-07-26 16:41:37.941147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-07-26 16:41:37.941292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-07-26 16:41:37.941324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-07-26 16:41:37.941500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-07-26 16:41:37.941533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 
00:36:18.481 [2024-07-26 16:41:37.941690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-07-26 16:41:37.941722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-07-26 16:41:37.941887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-07-26 16:41:37.941922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-07-26 16:41:37.942135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-07-26 16:41:37.942168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-07-26 16:41:37.942369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-07-26 16:41:37.942401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-07-26 16:41:37.942601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-07-26 16:41:37.942637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-07-26 16:41:37.942802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-07-26 16:41:37.942838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-07-26 16:41:37.943039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-07-26 16:41:37.943082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-07-26 16:41:37.943269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-07-26 16:41:37.943302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-07-26 16:41:37.943499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-07-26 16:41:37.943535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 00:36:18.481 [2024-07-26 16:41:37.943705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.481 [2024-07-26 16:41:37.943744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.481 qpair failed and we were unable to recover it. 
00:36:18.481 [2024-07-26 16:41:37.943948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-07-26 16:41:37.943984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-07-26 16:41:37.944164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-07-26 16:41:37.944197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-07-26 16:41:37.944395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-07-26 16:41:37.944427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-07-26 16:41:37.944655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-07-26 16:41:37.944692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-07-26 16:41:37.944856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-07-26 16:41:37.944892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-07-26 16:41:37.945074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-07-26 16:41:37.945107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-07-26 16:41:37.945301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-07-26 16:41:37.945333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-07-26 16:41:37.945511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-07-26 16:41:37.945562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-07-26 16:41:37.945759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-07-26 16:41:37.945794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-07-26 16:41:37.945994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-07-26 16:41:37.946029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 
00:36:18.482 [2024-07-26 16:41:37.946213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-07-26 16:41:37.946246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-07-26 16:41:37.946418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-07-26 16:41:37.946450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-07-26 16:41:37.946672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-07-26 16:41:37.946708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-07-26 16:41:37.946900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-07-26 16:41:37.946935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-07-26 16:41:37.947145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-07-26 16:41:37.947177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-07-26 16:41:37.947331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-07-26 16:41:37.947363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-07-26 16:41:37.947532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-07-26 16:41:37.947572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-07-26 16:41:37.947762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-07-26 16:41:37.947797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-07-26 16:41:37.947983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-07-26 16:41:37.948019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-07-26 16:41:37.948218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-07-26 16:41:37.948250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 
00:36:18.482 [2024-07-26 16:41:37.948395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-07-26 16:41:37.948427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-07-26 16:41:37.948645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-07-26 16:41:37.948710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-07-26 16:41:37.948904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-07-26 16:41:37.948966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-07-26 16:41:37.949161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-07-26 16:41:37.949197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-07-26 16:41:37.949392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-07-26 16:41:37.949444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-07-26 16:41:37.949636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-07-26 16:41:37.949673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-07-26 16:41:37.949888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-07-26 16:41:37.949923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-07-26 16:41:37.950102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-07-26 16:41:37.950134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-07-26 16:41:37.950310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-07-26 16:41:37.950361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-07-26 16:41:37.950524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-07-26 16:41:37.950560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 
00:36:18.482 [2024-07-26 16:41:37.950753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-07-26 16:41:37.950788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-07-26 16:41:37.950955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-07-26 16:41:37.950991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-07-26 16:41:37.951166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-07-26 16:41:37.951208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-07-26 16:41:37.951413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-07-26 16:41:37.951450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-07-26 16:41:37.951621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-07-26 16:41:37.951656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-07-26 16:41:37.951844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-07-26 16:41:37.951880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-07-26 16:41:37.952119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-07-26 16:41:37.952159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.482 [2024-07-26 16:41:37.952393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.482 [2024-07-26 16:41:37.952447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.482 qpair failed and we were unable to recover it. 00:36:18.483 [2024-07-26 16:41:37.952662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-07-26 16:41:37.952713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 00:36:18.483 [2024-07-26 16:41:37.952863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-07-26 16:41:37.952898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 
00:36:18.483 [2024-07-26 16:41:37.953080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-07-26 16:41:37.953115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 00:36:18.483 [2024-07-26 16:41:37.953311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-07-26 16:41:37.953367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 00:36:18.483 [2024-07-26 16:41:37.953619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-07-26 16:41:37.953671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 00:36:18.483 [2024-07-26 16:41:37.953886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-07-26 16:41:37.953926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 00:36:18.483 [2024-07-26 16:41:37.954137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-07-26 16:41:37.954171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 00:36:18.483 [2024-07-26 16:41:37.954347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-07-26 16:41:37.954384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 00:36:18.483 [2024-07-26 16:41:37.954554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-07-26 16:41:37.954590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 00:36:18.483 [2024-07-26 16:41:37.954779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-07-26 16:41:37.954815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 00:36:18.483 [2024-07-26 16:41:37.954986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-07-26 16:41:37.955020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 00:36:18.483 [2024-07-26 16:41:37.955185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-07-26 16:41:37.955218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 
00:36:18.483 [2024-07-26 16:41:37.955383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-07-26 16:41:37.955419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 00:36:18.483 [2024-07-26 16:41:37.955625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-07-26 16:41:37.955661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 00:36:18.483 [2024-07-26 16:41:37.955825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-07-26 16:41:37.955860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 00:36:18.483 [2024-07-26 16:41:37.956055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-07-26 16:41:37.956124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 00:36:18.483 [2024-07-26 16:41:37.956284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-07-26 16:41:37.956324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 00:36:18.483 [2024-07-26 16:41:37.956506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-07-26 16:41:37.956557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 00:36:18.483 [2024-07-26 16:41:37.956759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-07-26 16:41:37.956800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 00:36:18.483 [2024-07-26 16:41:37.956995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-07-26 16:41:37.957031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 00:36:18.483 [2024-07-26 16:41:37.957246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-07-26 16:41:37.957278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 00:36:18.483 [2024-07-26 16:41:37.957426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-07-26 16:41:37.957458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 
00:36:18.483 [2024-07-26 16:41:37.957656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-07-26 16:41:37.957692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 00:36:18.483 [2024-07-26 16:41:37.957860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-07-26 16:41:37.957898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 00:36:18.483 [2024-07-26 16:41:37.958106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-07-26 16:41:37.958139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 00:36:18.483 [2024-07-26 16:41:37.958287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-07-26 16:41:37.958319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 00:36:18.483 [2024-07-26 16:41:37.958522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-07-26 16:41:37.958555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 00:36:18.483 [2024-07-26 16:41:37.958729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-07-26 16:41:37.958764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 00:36:18.483 [2024-07-26 16:41:37.958959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-07-26 16:41:37.958995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 00:36:18.483 [2024-07-26 16:41:37.959189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-07-26 16:41:37.959222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 00:36:18.483 [2024-07-26 16:41:37.959373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-07-26 16:41:37.959407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 00:36:18.483 [2024-07-26 16:41:37.959638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-07-26 16:41:37.959674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 
00:36:18.483 [2024-07-26 16:41:37.959896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-07-26 16:41:37.959932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 00:36:18.483 [2024-07-26 16:41:37.960137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-07-26 16:41:37.960171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 00:36:18.483 [2024-07-26 16:41:37.960349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-07-26 16:41:37.960382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 00:36:18.483 [2024-07-26 16:41:37.960586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-07-26 16:41:37.960618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 00:36:18.483 [2024-07-26 16:41:37.960763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.483 [2024-07-26 16:41:37.960795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.483 qpair failed and we were unable to recover it. 00:36:18.483 [2024-07-26 16:41:37.961020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.484 [2024-07-26 16:41:37.961056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.484 qpair failed and we were unable to recover it. 00:36:18.484 [2024-07-26 16:41:37.961242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.484 [2024-07-26 16:41:37.961274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.484 qpair failed and we were unable to recover it. 00:36:18.484 [2024-07-26 16:41:37.961431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.484 [2024-07-26 16:41:37.961464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.484 qpair failed and we were unable to recover it. 00:36:18.484 [2024-07-26 16:41:37.961659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.484 [2024-07-26 16:41:37.961695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.484 qpair failed and we were unable to recover it. 00:36:18.484 [2024-07-26 16:41:37.961893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.484 [2024-07-26 16:41:37.961929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.484 qpair failed and we were unable to recover it. 
00:36:18.484 [2024-07-26 16:41:37.962132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.484 [2024-07-26 16:41:37.962165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.484 qpair failed and we were unable to recover it. 00:36:18.484 [2024-07-26 16:41:37.962336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.484 [2024-07-26 16:41:37.962368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.484 qpair failed and we were unable to recover it. 00:36:18.484 [2024-07-26 16:41:37.962573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.484 [2024-07-26 16:41:37.962609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.484 qpair failed and we were unable to recover it. 00:36:18.484 [2024-07-26 16:41:37.962802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.484 [2024-07-26 16:41:37.962837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.484 qpair failed and we were unable to recover it. 00:36:18.484 [2024-07-26 16:41:37.963066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.484 [2024-07-26 16:41:37.963116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.484 qpair failed and we were unable to recover it. 00:36:18.484 [2024-07-26 16:41:37.963316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.484 [2024-07-26 16:41:37.963348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.484 qpair failed and we were unable to recover it. 00:36:18.484 [2024-07-26 16:41:37.963587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.484 [2024-07-26 16:41:37.963623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.484 qpair failed and we were unable to recover it. 00:36:18.484 [2024-07-26 16:41:37.963903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.484 [2024-07-26 16:41:37.963939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.484 qpair failed and we were unable to recover it. 00:36:18.484 [2024-07-26 16:41:37.964146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.484 [2024-07-26 16:41:37.964179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.484 qpair failed and we were unable to recover it. 00:36:18.484 [2024-07-26 16:41:37.964378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.484 [2024-07-26 16:41:37.964411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.484 qpair failed and we were unable to recover it. 
00:36:18.484 [2024-07-26 16:41:37.964618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.484 [2024-07-26 16:41:37.964650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.484 qpair failed and we were unable to recover it. 00:36:18.484 [2024-07-26 16:41:37.964825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.484 [2024-07-26 16:41:37.964861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.484 qpair failed and we were unable to recover it. 00:36:18.484 [2024-07-26 16:41:37.965054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.484 [2024-07-26 16:41:37.965113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.484 qpair failed and we were unable to recover it. 00:36:18.484 [2024-07-26 16:41:37.965256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.484 [2024-07-26 16:41:37.965288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.484 qpair failed and we were unable to recover it. 00:36:18.484 [2024-07-26 16:41:37.965506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.484 [2024-07-26 16:41:37.965542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.484 qpair failed and we were unable to recover it. 00:36:18.484 [2024-07-26 16:41:37.965715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.484 [2024-07-26 16:41:37.965750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.484 qpair failed and we were unable to recover it. 00:36:18.484 [2024-07-26 16:41:37.965997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.484 [2024-07-26 16:41:37.966038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.484 qpair failed and we were unable to recover it. 00:36:18.484 [2024-07-26 16:41:37.966221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.484 [2024-07-26 16:41:37.966254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.484 qpair failed and we were unable to recover it. 00:36:18.484 [2024-07-26 16:41:37.966437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.484 [2024-07-26 16:41:37.966470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.484 qpair failed and we were unable to recover it. 00:36:18.484 [2024-07-26 16:41:37.966706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.484 [2024-07-26 16:41:37.966741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.484 qpair failed and we were unable to recover it. 
00:36:18.484 [2024-07-26 16:41:37.966918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.484 [2024-07-26 16:41:37.966953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.484 qpair failed and we were unable to recover it. 00:36:18.484 [2024-07-26 16:41:37.967147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.484 [2024-07-26 16:41:37.967179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.484 qpair failed and we were unable to recover it. 00:36:18.484 [2024-07-26 16:41:37.967348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.484 [2024-07-26 16:41:37.967391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.484 qpair failed and we were unable to recover it. 00:36:18.484 [2024-07-26 16:41:37.967550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.484 [2024-07-26 16:41:37.967586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.484 qpair failed and we were unable to recover it. 00:36:18.484 [2024-07-26 16:41:37.967715] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:36:18.484 [2024-07-26 16:41:37.967777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.484 [2024-07-26 16:41:37.967813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.484 qpair failed and we were unable to recover it. 00:36:18.485 [2024-07-26 16:41:37.967842] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:18.485 [2024-07-26 16:41:37.967977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.485 [2024-07-26 16:41:37.968008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.485 qpair failed and we were unable to recover it. 00:36:18.485 [2024-07-26 16:41:37.968192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.485 [2024-07-26 16:41:37.968224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.485 qpair failed and we were unable to recover it. 00:36:18.485 [2024-07-26 16:41:37.968417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.485 [2024-07-26 16:41:37.968453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.485 qpair failed and we were unable to recover it. 00:36:18.485 [2024-07-26 16:41:37.968645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.485 [2024-07-26 16:41:37.968685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.485 qpair failed and we were unable to recover it. 
00:36:18.485 [2024-07-26 16:41:37.968902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.485 [2024-07-26 16:41:37.968938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.485 qpair failed and we were unable to recover it. 00:36:18.485 [2024-07-26 16:41:37.969153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.485 [2024-07-26 16:41:37.969185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.485 qpair failed and we were unable to recover it. 00:36:18.485 [2024-07-26 16:41:37.969338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.485 [2024-07-26 16:41:37.969371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.485 qpair failed and we were unable to recover it. 00:36:18.485 [2024-07-26 16:41:37.969652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.485 [2024-07-26 16:41:37.969709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.485 qpair failed and we were unable to recover it. 00:36:18.485 [2024-07-26 16:41:37.969928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.485 [2024-07-26 16:41:37.969964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.485 qpair failed and we were unable to recover it. 00:36:18.485 [2024-07-26 16:41:37.970169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.485 [2024-07-26 16:41:37.970201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.485 qpair failed and we were unable to recover it. 00:36:18.485 [2024-07-26 16:41:37.970394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.485 [2024-07-26 16:41:37.970441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.485 qpair failed and we were unable to recover it. 00:36:18.485 [2024-07-26 16:41:37.970690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.485 [2024-07-26 16:41:37.970742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.485 qpair failed and we were unable to recover it. 00:36:18.485 [2024-07-26 16:41:37.970975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.485 [2024-07-26 16:41:37.971027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.485 qpair failed and we were unable to recover it. 00:36:18.485 [2024-07-26 16:41:37.971183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.485 [2024-07-26 16:41:37.971217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.485 qpair failed and we were unable to recover it. 
00:36:18.485 [2024-07-26 16:41:37.971443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.485 [2024-07-26 16:41:37.971494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.485 qpair failed and we were unable to recover it. 00:36:18.485 [2024-07-26 16:41:37.971647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.485 [2024-07-26 16:41:37.971681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.485 qpair failed and we were unable to recover it. 00:36:18.485 [2024-07-26 16:41:37.971891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.485 [2024-07-26 16:41:37.971943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.485 qpair failed and we were unable to recover it. 00:36:18.485 [2024-07-26 16:41:37.972140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.485 [2024-07-26 16:41:37.972194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.485 qpair failed and we were unable to recover it. 00:36:18.485 [2024-07-26 16:41:37.972427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.485 [2024-07-26 16:41:37.972478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.485 qpair failed and we were unable to recover it. 00:36:18.485 [2024-07-26 16:41:37.972764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.485 [2024-07-26 16:41:37.972818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.485 qpair failed and we were unable to recover it. 00:36:18.485 [2024-07-26 16:41:37.973000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.485 [2024-07-26 16:41:37.973032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.485 qpair failed and we were unable to recover it. 00:36:18.485 [2024-07-26 16:41:37.973240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.485 [2024-07-26 16:41:37.973289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.485 qpair failed and we were unable to recover it. 00:36:18.485 [2024-07-26 16:41:37.973470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.485 [2024-07-26 16:41:37.973521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.485 qpair failed and we were unable to recover it. 00:36:18.485 [2024-07-26 16:41:37.973748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.485 [2024-07-26 16:41:37.973800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.485 qpair failed and we were unable to recover it. 
00:36:18.485 [2024-07-26 16:41:37.973988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.485 [2024-07-26 16:41:37.974022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.485 qpair failed and we were unable to recover it. 00:36:18.485 [2024-07-26 16:41:37.974233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.485 [2024-07-26 16:41:37.974285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.485 qpair failed and we were unable to recover it. 00:36:18.485 [2024-07-26 16:41:37.974632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.485 [2024-07-26 16:41:37.974692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.485 qpair failed and we were unable to recover it. 00:36:18.485 [2024-07-26 16:41:37.974886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.485 [2024-07-26 16:41:37.974919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.485 qpair failed and we were unable to recover it. 00:36:18.485 [2024-07-26 16:41:37.975087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.485 [2024-07-26 16:41:37.975121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.485 qpair failed and we were unable to recover it. 00:36:18.485 [2024-07-26 16:41:37.975292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.485 [2024-07-26 16:41:37.975345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.485 qpair failed and we were unable to recover it. 00:36:18.485 [2024-07-26 16:41:37.975523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.485 [2024-07-26 16:41:37.975574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.485 qpair failed and we were unable to recover it. 00:36:18.485 [2024-07-26 16:41:37.975767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.485 [2024-07-26 16:41:37.975819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.485 qpair failed and we were unable to recover it. 00:36:18.485 [2024-07-26 16:41:37.976004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.485 [2024-07-26 16:41:37.976038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.485 qpair failed and we were unable to recover it. 00:36:18.485 [2024-07-26 16:41:37.976220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.485 [2024-07-26 16:41:37.976271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.485 qpair failed and we were unable to recover it. 
00:36:18.485 [2024-07-26 16:41:37.976467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.485 [2024-07-26 16:41:37.976518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.485 qpair failed and we were unable to recover it. 00:36:18.485 [2024-07-26 16:41:37.976762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.485 [2024-07-26 16:41:37.976813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.485 qpair failed and we were unable to recover it. 00:36:18.485 [2024-07-26 16:41:37.976989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.485 [2024-07-26 16:41:37.977023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.485 qpair failed and we were unable to recover it. 00:36:18.485 [2024-07-26 16:41:37.977255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-07-26 16:41:37.977306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-07-26 16:41:37.977554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-07-26 16:41:37.977588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-07-26 16:41:37.977800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-07-26 16:41:37.977850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-07-26 16:41:37.978067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-07-26 16:41:37.978103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-07-26 16:41:37.978306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-07-26 16:41:37.978356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-07-26 16:41:37.978548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-07-26 16:41:37.978601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-07-26 16:41:37.978939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-07-26 16:41:37.978999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 
00:36:18.486 [2024-07-26 16:41:37.979209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-07-26 16:41:37.979261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-07-26 16:41:37.979456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-07-26 16:41:37.979508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-07-26 16:41:37.979894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-07-26 16:41:37.979969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-07-26 16:41:37.980193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-07-26 16:41:37.980229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-07-26 16:41:37.980404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-07-26 16:41:37.980440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-07-26 16:41:37.980777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-07-26 16:41:37.980836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-07-26 16:41:37.981043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-07-26 16:41:37.981106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-07-26 16:41:37.981287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-07-26 16:41:37.981319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-07-26 16:41:37.981557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-07-26 16:41:37.981593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-07-26 16:41:37.981794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-07-26 16:41:37.981830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 
00:36:18.486 [2024-07-26 16:41:37.981995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-07-26 16:41:37.982029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-07-26 16:41:37.982248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-07-26 16:41:37.982295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-07-26 16:41:37.982544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-07-26 16:41:37.982597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-07-26 16:41:37.982811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-07-26 16:41:37.982862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-07-26 16:41:37.983071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-07-26 16:41:37.983106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-07-26 16:41:37.983288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-07-26 16:41:37.983321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-07-26 16:41:37.983548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-07-26 16:41:37.983584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-07-26 16:41:37.983832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-07-26 16:41:37.983868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-07-26 16:41:37.984041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-07-26 16:41:37.984110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-07-26 16:41:37.984263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-07-26 16:41:37.984295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 
00:36:18.486 [2024-07-26 16:41:37.984504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-07-26 16:41:37.984539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-07-26 16:41:37.984701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-07-26 16:41:37.984738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-07-26 16:41:37.984936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-07-26 16:41:37.984973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-07-26 16:41:37.985217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-07-26 16:41:37.985265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-07-26 16:41:37.985473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-07-26 16:41:37.985509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-07-26 16:41:37.985741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-07-26 16:41:37.985792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-07-26 16:41:37.985979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-07-26 16:41:37.986012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-07-26 16:41:37.986202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-07-26 16:41:37.986237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-07-26 16:41:37.986429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-07-26 16:41:37.986479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 00:36:18.486 [2024-07-26 16:41:37.986678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-07-26 16:41:37.986727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.486 qpair failed and we were unable to recover it. 
00:36:18.486 [2024-07-26 16:41:37.986891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.486 [2024-07-26 16:41:37.986923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-07-26 16:41:37.987085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-07-26 16:41:37.987119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-07-26 16:41:37.987280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-07-26 16:41:37.987330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-07-26 16:41:37.987560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-07-26 16:41:37.987613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-07-26 16:41:37.987806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-07-26 16:41:37.987857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-07-26 16:41:37.988056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-07-26 16:41:37.988107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-07-26 16:41:37.988272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-07-26 16:41:37.988322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-07-26 16:41:37.988546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-07-26 16:41:37.988597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-07-26 16:41:37.988796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-07-26 16:41:37.988846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-07-26 16:41:37.989022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-07-26 16:41:37.989067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 
00:36:18.487 [2024-07-26 16:41:37.989294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-07-26 16:41:37.989345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-07-26 16:41:37.989572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-07-26 16:41:37.989623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-07-26 16:41:37.989819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-07-26 16:41:37.989870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-07-26 16:41:37.990042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-07-26 16:41:37.990082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-07-26 16:41:37.990284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-07-26 16:41:37.990317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-07-26 16:41:37.990544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-07-26 16:41:37.990580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-07-26 16:41:37.990774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-07-26 16:41:37.990825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-07-26 16:41:37.991022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-07-26 16:41:37.991055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-07-26 16:41:37.991239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-07-26 16:41:37.991272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-07-26 16:41:37.991510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-07-26 16:41:37.991560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 
00:36:18.487 [2024-07-26 16:41:37.991732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-07-26 16:41:37.991782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-07-26 16:41:37.991936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-07-26 16:41:37.991968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-07-26 16:41:37.992198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-07-26 16:41:37.992250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-07-26 16:41:37.992483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-07-26 16:41:37.992535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-07-26 16:41:37.992745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-07-26 16:41:37.992783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-07-26 16:41:37.992971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-07-26 16:41:37.993007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-07-26 16:41:37.993241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-07-26 16:41:37.993275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-07-26 16:41:37.993573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-07-26 16:41:37.993628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-07-26 16:41:37.993818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-07-26 16:41:37.993853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-07-26 16:41:37.994048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-07-26 16:41:37.994090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 
00:36:18.487 [2024-07-26 16:41:37.994272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-07-26 16:41:37.994305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-07-26 16:41:37.994558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-07-26 16:41:37.994608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-07-26 16:41:37.994804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-07-26 16:41:37.994839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-07-26 16:41:37.995033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-07-26 16:41:37.995078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-07-26 16:41:37.995269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-07-26 16:41:37.995301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-07-26 16:41:37.995498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-07-26 16:41:37.995533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-07-26 16:41:37.995703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-07-26 16:41:37.995739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.487 [2024-07-26 16:41:37.995931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.487 [2024-07-26 16:41:37.995967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.487 qpair failed and we were unable to recover it. 00:36:18.488 [2024-07-26 16:41:37.996194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-07-26 16:41:37.996241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-07-26 16:41:37.996459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-07-26 16:41:37.996513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 
00:36:18.488 [2024-07-26 16:41:37.996698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-07-26 16:41:37.996750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-07-26 16:41:37.996941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-07-26 16:41:37.996991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-07-26 16:41:37.997173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-07-26 16:41:37.997207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-07-26 16:41:37.997406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-07-26 16:41:37.997458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-07-26 16:41:37.997695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-07-26 16:41:37.997733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-07-26 16:41:37.997928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-07-26 16:41:37.997964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-07-26 16:41:37.998161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-07-26 16:41:37.998197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-07-26 16:41:37.998352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-07-26 16:41:37.998388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-07-26 16:41:37.998582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-07-26 16:41:37.998618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-07-26 16:41:37.998803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-07-26 16:41:37.998839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 
00:36:18.488 [2024-07-26 16:41:37.999024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-07-26 16:41:37.999064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-07-26 16:41:37.999242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-07-26 16:41:37.999274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-07-26 16:41:37.999478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-07-26 16:41:37.999528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-07-26 16:41:37.999771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-07-26 16:41:37.999822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-07-26 16:41:37.999999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-07-26 16:41:38.000034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-07-26 16:41:38.000262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-07-26 16:41:38.000295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-07-26 16:41:38.000467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-07-26 16:41:38.000517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-07-26 16:41:38.000720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-07-26 16:41:38.000771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-07-26 16:41:38.000946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-07-26 16:41:38.000978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-07-26 16:41:38.001180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-07-26 16:41:38.001230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 
00:36:18.488 [2024-07-26 16:41:38.001470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-07-26 16:41:38.001522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-07-26 16:41:38.001753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-07-26 16:41:38.001804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-07-26 16:41:38.001954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-07-26 16:41:38.001986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-07-26 16:41:38.002190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-07-26 16:41:38.002242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-07-26 16:41:38.002476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-07-26 16:41:38.002527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-07-26 16:41:38.002697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-07-26 16:41:38.002751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-07-26 16:41:38.002904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-07-26 16:41:38.002937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-07-26 16:41:38.003175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-07-26 16:41:38.003228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-07-26 16:41:38.003373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-07-26 16:41:38.003404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-07-26 16:41:38.003613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-07-26 16:41:38.003646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 
00:36:18.488 [2024-07-26 16:41:38.003821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-07-26 16:41:38.003854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-07-26 16:41:38.004030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-07-26 16:41:38.004077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-07-26 16:41:38.004239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-07-26 16:41:38.004289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-07-26 16:41:38.004474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-07-26 16:41:38.004511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-07-26 16:41:38.004738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.488 [2024-07-26 16:41:38.004771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.488 qpair failed and we were unable to recover it. 00:36:18.488 [2024-07-26 16:41:38.004950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-07-26 16:41:38.004982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.489 qpair failed and we were unable to recover it. 00:36:18.489 [2024-07-26 16:41:38.005212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-07-26 16:41:38.005269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.489 qpair failed and we were unable to recover it. 00:36:18.489 [2024-07-26 16:41:38.005496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-07-26 16:41:38.005546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.489 qpair failed and we were unable to recover it. 00:36:18.489 [2024-07-26 16:41:38.005752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-07-26 16:41:38.005802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.489 qpair failed and we were unable to recover it. 00:36:18.489 [2024-07-26 16:41:38.005983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-07-26 16:41:38.006015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.489 qpair failed and we were unable to recover it. 
00:36:18.489 [2024-07-26 16:41:38.006220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-07-26 16:41:38.006270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.489 qpair failed and we were unable to recover it. 00:36:18.489 [2024-07-26 16:41:38.006450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-07-26 16:41:38.006499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.489 qpair failed and we were unable to recover it. 00:36:18.489 [2024-07-26 16:41:38.006704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-07-26 16:41:38.006754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.489 qpair failed and we were unable to recover it. 00:36:18.489 [2024-07-26 16:41:38.006927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-07-26 16:41:38.006959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.489 qpair failed and we were unable to recover it. 00:36:18.489 [2024-07-26 16:41:38.007182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-07-26 16:41:38.007234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.489 qpair failed and we were unable to recover it. 00:36:18.489 [2024-07-26 16:41:38.007460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-07-26 16:41:38.007511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.489 qpair failed and we were unable to recover it. 00:36:18.489 [2024-07-26 16:41:38.007705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-07-26 16:41:38.007756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.489 qpair failed and we were unable to recover it. 00:36:18.489 [2024-07-26 16:41:38.007930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-07-26 16:41:38.007963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.489 qpair failed and we were unable to recover it. 00:36:18.489 [2024-07-26 16:41:38.008143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-07-26 16:41:38.008194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.489 qpair failed and we were unable to recover it. 00:36:18.489 [2024-07-26 16:41:38.008406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-07-26 16:41:38.008456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.489 qpair failed and we were unable to recover it. 
00:36:18.489 [2024-07-26 16:41:38.008663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-07-26 16:41:38.008723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.489 qpair failed and we were unable to recover it. 00:36:18.489 [2024-07-26 16:41:38.008905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-07-26 16:41:38.008939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.489 qpair failed and we were unable to recover it. 00:36:18.489 [2024-07-26 16:41:38.009090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-07-26 16:41:38.009122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.489 qpair failed and we were unable to recover it. 00:36:18.489 [2024-07-26 16:41:38.009333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-07-26 16:41:38.009389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.489 qpair failed and we were unable to recover it. 00:36:18.489 [2024-07-26 16:41:38.009589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-07-26 16:41:38.009638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.489 qpair failed and we were unable to recover it. 00:36:18.489 [2024-07-26 16:41:38.009837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-07-26 16:41:38.009868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.489 qpair failed and we were unable to recover it. 00:36:18.489 [2024-07-26 16:41:38.010053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-07-26 16:41:38.010090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.489 qpair failed and we were unable to recover it. 00:36:18.489 [2024-07-26 16:41:38.010292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-07-26 16:41:38.010350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.489 qpair failed and we were unable to recover it. 00:36:18.489 [2024-07-26 16:41:38.010576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-07-26 16:41:38.010626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.489 qpair failed and we were unable to recover it. 00:36:18.489 [2024-07-26 16:41:38.010799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-07-26 16:41:38.010852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.489 qpair failed and we were unable to recover it. 
00:36:18.489 [2024-07-26 16:41:38.011029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-07-26 16:41:38.011072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.489 qpair failed and we were unable to recover it. 00:36:18.489 [2024-07-26 16:41:38.011242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-07-26 16:41:38.011293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.489 qpair failed and we were unable to recover it. 00:36:18.489 [2024-07-26 16:41:38.011490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-07-26 16:41:38.011541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.489 qpair failed and we were unable to recover it. 00:36:18.489 [2024-07-26 16:41:38.011743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-07-26 16:41:38.011793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.489 qpair failed and we were unable to recover it. 00:36:18.489 [2024-07-26 16:41:38.011967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-07-26 16:41:38.012001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.489 qpair failed and we were unable to recover it. 00:36:18.489 [2024-07-26 16:41:38.012250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-07-26 16:41:38.012301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.489 qpair failed and we were unable to recover it. 00:36:18.489 [2024-07-26 16:41:38.012512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-07-26 16:41:38.012549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.489 qpair failed and we were unable to recover it. 00:36:18.489 [2024-07-26 16:41:38.012717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.489 [2024-07-26 16:41:38.012750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.490 qpair failed and we were unable to recover it. 00:36:18.490 [2024-07-26 16:41:38.012951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.490 [2024-07-26 16:41:38.012983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.490 qpair failed and we were unable to recover it. 00:36:18.490 [2024-07-26 16:41:38.013205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.490 [2024-07-26 16:41:38.013239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.490 qpair failed and we were unable to recover it. 
00:36:18.490 [2024-07-26 16:41:38.013437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.490 [2024-07-26 16:41:38.013487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.490 qpair failed and we were unable to recover it. 00:36:18.490 [2024-07-26 16:41:38.013700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.490 [2024-07-26 16:41:38.013734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.490 qpair failed and we were unable to recover it. 00:36:18.490 [2024-07-26 16:41:38.013910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.490 [2024-07-26 16:41:38.013943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.490 qpair failed and we were unable to recover it. 00:36:18.490 [2024-07-26 16:41:38.014155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.490 [2024-07-26 16:41:38.014206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.490 qpair failed and we were unable to recover it. 00:36:18.490 [2024-07-26 16:41:38.014387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.490 [2024-07-26 16:41:38.014437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.490 qpair failed and we were unable to recover it. 00:36:18.490 [2024-07-26 16:41:38.014662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.490 [2024-07-26 16:41:38.014713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.490 qpair failed and we were unable to recover it. 00:36:18.490 [2024-07-26 16:41:38.014862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.490 [2024-07-26 16:41:38.014901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.490 qpair failed and we were unable to recover it. 00:36:18.490 [2024-07-26 16:41:38.015140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.490 [2024-07-26 16:41:38.015192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.490 qpair failed and we were unable to recover it. 00:36:18.490 [2024-07-26 16:41:38.015404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.490 [2024-07-26 16:41:38.015438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.490 qpair failed and we were unable to recover it. 00:36:18.490 [2024-07-26 16:41:38.015637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.490 [2024-07-26 16:41:38.015669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.490 qpair failed and we were unable to recover it. 
00:36:18.490 [2024-07-26 16:41:38.015851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.490 [2024-07-26 16:41:38.015885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.490 qpair failed and we were unable to recover it. 00:36:18.490 [2024-07-26 16:41:38.016070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.490 [2024-07-26 16:41:38.016104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.490 qpair failed and we were unable to recover it. 00:36:18.490 [2024-07-26 16:41:38.016313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.490 [2024-07-26 16:41:38.016365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.490 qpair failed and we were unable to recover it. 00:36:18.490 [2024-07-26 16:41:38.016593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.490 [2024-07-26 16:41:38.016644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.490 qpair failed and we were unable to recover it. 00:36:18.490 [2024-07-26 16:41:38.016841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.490 [2024-07-26 16:41:38.016891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.490 qpair failed and we were unable to recover it. 00:36:18.490 [2024-07-26 16:41:38.017133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.490 [2024-07-26 16:41:38.017185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.490 qpair failed and we were unable to recover it. 00:36:18.490 [2024-07-26 16:41:38.017412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.490 [2024-07-26 16:41:38.017462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.490 qpair failed and we were unable to recover it. 00:36:18.490 [2024-07-26 16:41:38.017648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.490 [2024-07-26 16:41:38.017698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.490 qpair failed and we were unable to recover it. 00:36:18.490 [2024-07-26 16:41:38.017908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.490 [2024-07-26 16:41:38.017941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.490 qpair failed and we were unable to recover it. 00:36:18.490 [2024-07-26 16:41:38.018156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.490 [2024-07-26 16:41:38.018208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.490 qpair failed and we were unable to recover it. 
00:36:18.490 [2024-07-26 16:41:38.018388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.490 [2024-07-26 16:41:38.018429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.490 qpair failed and we were unable to recover it. 00:36:18.490 [2024-07-26 16:41:38.018627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.490 [2024-07-26 16:41:38.018677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.490 qpair failed and we were unable to recover it. 00:36:18.490 [2024-07-26 16:41:38.018862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.490 [2024-07-26 16:41:38.018895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.490 qpair failed and we were unable to recover it. 00:36:18.490 [2024-07-26 16:41:38.019101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.490 [2024-07-26 16:41:38.019152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.490 qpair failed and we were unable to recover it. 00:36:18.490 [2024-07-26 16:41:38.020192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.490 [2024-07-26 16:41:38.020245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.490 qpair failed and we were unable to recover it. 00:36:18.490 [2024-07-26 16:41:38.020442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.490 [2024-07-26 16:41:38.020495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.490 qpair failed and we were unable to recover it. 00:36:18.490 [2024-07-26 16:41:38.020689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.490 [2024-07-26 16:41:38.020723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.490 qpair failed and we were unable to recover it. 00:36:18.490 [2024-07-26 16:41:38.020877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.490 [2024-07-26 16:41:38.020911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.490 qpair failed and we were unable to recover it. 00:36:18.490 [2024-07-26 16:41:38.021120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.490 [2024-07-26 16:41:38.021158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.490 qpair failed and we were unable to recover it. 00:36:18.490 [2024-07-26 16:41:38.021351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.490 [2024-07-26 16:41:38.021406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.490 qpair failed and we were unable to recover it. 
00:36:18.490 [2024-07-26 16:41:38.021616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.490 [2024-07-26 16:41:38.021649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.490 qpair failed and we were unable to recover it. 00:36:18.490 [2024-07-26 16:41:38.021861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.490 [2024-07-26 16:41:38.021893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.490 qpair failed and we were unable to recover it. 00:36:18.490 [2024-07-26 16:41:38.022094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.490 [2024-07-26 16:41:38.022126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.490 qpair failed and we were unable to recover it. 00:36:18.490 [2024-07-26 16:41:38.022346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.490 [2024-07-26 16:41:38.022395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.490 qpair failed and we were unable to recover it. 00:36:18.490 [2024-07-26 16:41:38.022603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.490 [2024-07-26 16:41:38.022652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.490 qpair failed and we were unable to recover it. 00:36:18.490 [2024-07-26 16:41:38.022856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.491 [2024-07-26 16:41:38.022888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.491 qpair failed and we were unable to recover it. 00:36:18.491 [2024-07-26 16:41:38.023066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.491 [2024-07-26 16:41:38.023099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.491 qpair failed and we were unable to recover it. 00:36:18.491 [2024-07-26 16:41:38.023282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.491 [2024-07-26 16:41:38.023335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.491 qpair failed and we were unable to recover it. 00:36:18.491 [2024-07-26 16:41:38.023557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.491 [2024-07-26 16:41:38.023608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.491 qpair failed and we were unable to recover it. 00:36:18.491 [2024-07-26 16:41:38.023767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.491 [2024-07-26 16:41:38.023811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.491 qpair failed and we were unable to recover it. 
00:36:18.491 [2024-07-26 16:41:38.024052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.491 [2024-07-26 16:41:38.024100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.491 qpair failed and we were unable to recover it. 00:36:18.491 [2024-07-26 16:41:38.024280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.491 [2024-07-26 16:41:38.024329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.491 qpair failed and we were unable to recover it. 00:36:18.491 [2024-07-26 16:41:38.024563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.491 [2024-07-26 16:41:38.024615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.491 qpair failed and we were unable to recover it. 00:36:18.491 [2024-07-26 16:41:38.024840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.491 [2024-07-26 16:41:38.024874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.491 qpair failed and we were unable to recover it. 00:36:18.491 [2024-07-26 16:41:38.025082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.491 [2024-07-26 16:41:38.025126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.491 qpair failed and we were unable to recover it. 00:36:18.491 [2024-07-26 16:41:38.025331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.491 [2024-07-26 16:41:38.025386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.491 qpair failed and we were unable to recover it. 00:36:18.491 [2024-07-26 16:41:38.025593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.491 [2024-07-26 16:41:38.025649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.491 qpair failed and we were unable to recover it. 00:36:18.491 [2024-07-26 16:41:38.025825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.491 [2024-07-26 16:41:38.025876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.491 qpair failed and we were unable to recover it. 00:36:18.491 [2024-07-26 16:41:38.026022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.491 [2024-07-26 16:41:38.026055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.491 qpair failed and we were unable to recover it. 00:36:18.491 [2024-07-26 16:41:38.026271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.491 [2024-07-26 16:41:38.026322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.491 qpair failed and we were unable to recover it. 
00:36:18.491 [2024-07-26 16:41:38.026505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.491 [2024-07-26 16:41:38.026556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.491 qpair failed and we were unable to recover it. 00:36:18.491 [2024-07-26 16:41:38.026736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.491 [2024-07-26 16:41:38.026787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.491 qpair failed and we were unable to recover it. 00:36:18.491 [2024-07-26 16:41:38.027007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.491 [2024-07-26 16:41:38.027040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.491 qpair failed and we were unable to recover it. 00:36:18.491 [2024-07-26 16:41:38.027227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.491 [2024-07-26 16:41:38.027277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.491 qpair failed and we were unable to recover it. 00:36:18.491 [2024-07-26 16:41:38.027453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.491 [2024-07-26 16:41:38.027504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.491 qpair failed and we were unable to recover it. 00:36:18.491 [2024-07-26 16:41:38.027706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.491 [2024-07-26 16:41:38.027756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.491 qpair failed and we were unable to recover it. 00:36:18.491 [2024-07-26 16:41:38.027962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.491 [2024-07-26 16:41:38.027994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.491 qpair failed and we were unable to recover it. 00:36:18.491 [2024-07-26 16:41:38.028199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.491 [2024-07-26 16:41:38.028252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.491 qpair failed and we were unable to recover it. 00:36:18.491 [2024-07-26 16:41:38.028487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.491 [2024-07-26 16:41:38.028539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.491 qpair failed and we were unable to recover it. 00:36:18.491 [2024-07-26 16:41:38.028736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.491 [2024-07-26 16:41:38.028787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.491 qpair failed and we were unable to recover it. 
00:36:18.491 [2024-07-26 16:41:38.028969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.491 [2024-07-26 16:41:38.029002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.491 qpair failed and we were unable to recover it. 00:36:18.491 [2024-07-26 16:41:38.029230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.491 [2024-07-26 16:41:38.029282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.491 qpair failed and we were unable to recover it. 00:36:18.491 [2024-07-26 16:41:38.029515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.491 [2024-07-26 16:41:38.029566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.491 qpair failed and we were unable to recover it. 00:36:18.491 [2024-07-26 16:41:38.029781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.491 [2024-07-26 16:41:38.029832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.491 qpair failed and we were unable to recover it. 00:36:18.491 [2024-07-26 16:41:38.030020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.491 [2024-07-26 16:41:38.030053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.491 qpair failed and we were unable to recover it. 00:36:18.491 [2024-07-26 16:41:38.030259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.491 [2024-07-26 16:41:38.030319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.491 qpair failed and we were unable to recover it. 00:36:18.491 [2024-07-26 16:41:38.030548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.491 [2024-07-26 16:41:38.030606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.491 qpair failed and we were unable to recover it. 00:36:18.491 [2024-07-26 16:41:38.030855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.491 [2024-07-26 16:41:38.030894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.491 qpair failed and we were unable to recover it. 00:36:18.491 [2024-07-26 16:41:38.031077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.491 [2024-07-26 16:41:38.031110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.491 qpair failed and we were unable to recover it. 00:36:18.491 [2024-07-26 16:41:38.031281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.491 [2024-07-26 16:41:38.031317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.491 qpair failed and we were unable to recover it. 
00:36:18.491 [2024-07-26 16:41:38.031548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.491 [2024-07-26 16:41:38.031585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.491 qpair failed and we were unable to recover it. 00:36:18.491 [2024-07-26 16:41:38.031800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.491 [2024-07-26 16:41:38.031836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.491 qpair failed and we were unable to recover it. 00:36:18.491 [2024-07-26 16:41:38.032038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.491 [2024-07-26 16:41:38.032086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-07-26 16:41:38.032253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-07-26 16:41:38.032288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-07-26 16:41:38.032495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-07-26 16:41:38.032546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-07-26 16:41:38.032763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-07-26 16:41:38.032815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-07-26 16:41:38.033030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-07-26 16:41:38.033082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-07-26 16:41:38.033277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-07-26 16:41:38.033310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-07-26 16:41:38.033508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-07-26 16:41:38.033558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-07-26 16:41:38.033759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-07-26 16:41:38.033809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 
00:36:18.492 [2024-07-26 16:41:38.034008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-07-26 16:41:38.034051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-07-26 16:41:38.034265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-07-26 16:41:38.034297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-07-26 16:41:38.034479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-07-26 16:41:38.034530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-07-26 16:41:38.034762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-07-26 16:41:38.034813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-07-26 16:41:38.034957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-07-26 16:41:38.034990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-07-26 16:41:38.035200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-07-26 16:41:38.035233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-07-26 16:41:38.035436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-07-26 16:41:38.035492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-07-26 16:41:38.035732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-07-26 16:41:38.035788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-07-26 16:41:38.035997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-07-26 16:41:38.036034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-07-26 16:41:38.036247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-07-26 16:41:38.036280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 
00:36:18.492 [2024-07-26 16:41:38.036450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-07-26 16:41:38.036486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-07-26 16:41:38.036650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-07-26 16:41:38.036686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-07-26 16:41:38.036874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-07-26 16:41:38.036910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-07-26 16:41:38.037124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-07-26 16:41:38.037157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-07-26 16:41:38.037359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-07-26 16:41:38.037411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-07-26 16:41:38.037636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-07-26 16:41:38.037686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-07-26 16:41:38.037875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-07-26 16:41:38.037925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-07-26 16:41:38.038121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-07-26 16:41:38.038154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-07-26 16:41:38.038375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-07-26 16:41:38.038426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-07-26 16:41:38.038661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-07-26 16:41:38.038712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 
00:36:18.492 [2024-07-26 16:41:38.038901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-07-26 16:41:38.038933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-07-26 16:41:38.039166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-07-26 16:41:38.039217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-07-26 16:41:38.039442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-07-26 16:41:38.039503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-07-26 16:41:38.039712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-07-26 16:41:38.039762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-07-26 16:41:38.039963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-07-26 16:41:38.039996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-07-26 16:41:38.040209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-07-26 16:41:38.040260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-07-26 16:41:38.040465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-07-26 16:41:38.040515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-07-26 16:41:38.040697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-07-26 16:41:38.040749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-07-26 16:41:38.040899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-07-26 16:41:38.040933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 00:36:18.492 [2024-07-26 16:41:38.041167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.492 [2024-07-26 16:41:38.041220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.492 qpair failed and we were unable to recover it. 
00:36:18.493 [2024-07-26 16:41:38.041425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-07-26 16:41:38.041475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-07-26 16:41:38.041679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-07-26 16:41:38.041729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-07-26 16:41:38.041932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-07-26 16:41:38.041964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-07-26 16:41:38.042192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-07-26 16:41:38.042244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-07-26 16:41:38.042445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-07-26 16:41:38.042496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-07-26 16:41:38.042743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-07-26 16:41:38.042794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-07-26 16:41:38.042995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-07-26 16:41:38.043028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-07-26 16:41:38.043233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-07-26 16:41:38.043285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-07-26 16:41:38.043541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-07-26 16:41:38.043597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-07-26 16:41:38.043808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-07-26 16:41:38.043860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 
00:36:18.493 [2024-07-26 16:41:38.044048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-07-26 16:41:38.044097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-07-26 16:41:38.044298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-07-26 16:41:38.044352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-07-26 16:41:38.044545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-07-26 16:41:38.044596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-07-26 16:41:38.044797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-07-26 16:41:38.044847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-07-26 16:41:38.045027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-07-26 16:41:38.045068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-07-26 16:41:38.045267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-07-26 16:41:38.045319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-07-26 16:41:38.045546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-07-26 16:41:38.045616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-07-26 16:41:38.045819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-07-26 16:41:38.045858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-07-26 16:41:38.046024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-07-26 16:41:38.046080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-07-26 16:41:38.046300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-07-26 16:41:38.046332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 
00:36:18.493 [2024-07-26 16:41:38.046550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-07-26 16:41:38.046587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-07-26 16:41:38.046774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-07-26 16:41:38.046810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-07-26 16:41:38.046980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-07-26 16:41:38.047012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-07-26 16:41:38.047195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-07-26 16:41:38.047228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-07-26 16:41:38.047429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-07-26 16:41:38.047465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-07-26 16:41:38.047690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-07-26 16:41:38.047726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-07-26 16:41:38.047920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-07-26 16:41:38.047956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-07-26 16:41:38.048160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-07-26 16:41:38.048194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-07-26 16:41:38.048367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-07-26 16:41:38.048399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-07-26 16:41:38.048571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-07-26 16:41:38.048604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 
00:36:18.493 [2024-07-26 16:41:38.048820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-07-26 16:41:38.048857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-07-26 16:41:38.049024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.493 [2024-07-26 16:41:38.049056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.493 qpair failed and we were unable to recover it. 00:36:18.493 [2024-07-26 16:41:38.049221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-07-26 16:41:38.049253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-07-26 16:41:38.049442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-07-26 16:41:38.049478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-07-26 16:41:38.049729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-07-26 16:41:38.049765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 EAL: No free 2048 kB hugepages reported on node 1 00:36:18.494 [2024-07-26 16:41:38.049940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-07-26 16:41:38.049978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-07-26 16:41:38.050176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-07-26 16:41:38.050209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-07-26 16:41:38.050451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-07-26 16:41:38.050517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-07-26 16:41:38.050708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-07-26 16:41:38.050761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-07-26 16:41:38.050964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-07-26 16:41:38.050997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 
00:36:18.494 [2024-07-26 16:41:38.051185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-07-26 16:41:38.051218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-07-26 16:41:38.051442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-07-26 16:41:38.051478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-07-26 16:41:38.051707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-07-26 16:41:38.051759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-07-26 16:41:38.051977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-07-26 16:41:38.052011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-07-26 16:41:38.052199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-07-26 16:41:38.052233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-07-26 16:41:38.052405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-07-26 16:41:38.052440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-07-26 16:41:38.052687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-07-26 16:41:38.052738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-07-26 16:41:38.052941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-07-26 16:41:38.052976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-07-26 16:41:38.053164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-07-26 16:41:38.053216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-07-26 16:41:38.053446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-07-26 16:41:38.053498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 
00:36:18.494 [2024-07-26 16:41:38.053701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-07-26 16:41:38.053738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-07-26 16:41:38.053913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-07-26 16:41:38.053947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-07-26 16:41:38.054112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-07-26 16:41:38.054150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-07-26 16:41:38.054364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-07-26 16:41:38.054415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-07-26 16:41:38.054590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-07-26 16:41:38.054641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-07-26 16:41:38.054826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-07-26 16:41:38.054859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-07-26 16:41:38.055052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-07-26 16:41:38.055092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-07-26 16:41:38.055284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-07-26 16:41:38.055336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-07-26 16:41:38.055584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-07-26 16:41:38.055635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-07-26 16:41:38.055809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-07-26 16:41:38.055863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 
00:36:18.494 [2024-07-26 16:41:38.056052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-07-26 16:41:38.056092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-07-26 16:41:38.056319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-07-26 16:41:38.056370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-07-26 16:41:38.056573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-07-26 16:41:38.056624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-07-26 16:41:38.056788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-07-26 16:41:38.056838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-07-26 16:41:38.056985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-07-26 16:41:38.057018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-07-26 16:41:38.057233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-07-26 16:41:38.057284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-07-26 16:41:38.057531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-07-26 16:41:38.057583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-07-26 16:41:38.057771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-07-26 16:41:38.057823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-07-26 16:41:38.058029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.494 [2024-07-26 16:41:38.058076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.494 qpair failed and we were unable to recover it. 00:36:18.494 [2024-07-26 16:41:38.058275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-07-26 16:41:38.058332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.495 qpair failed and we were unable to recover it. 
00:36:18.495 [2024-07-26 16:41:38.058562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-07-26 16:41:38.058615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.495 qpair failed and we were unable to recover it. 00:36:18.495 [2024-07-26 16:41:38.058825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-07-26 16:41:38.058864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.495 qpair failed and we were unable to recover it. 00:36:18.495 [2024-07-26 16:41:38.059064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-07-26 16:41:38.059117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.495 qpair failed and we were unable to recover it. 00:36:18.495 [2024-07-26 16:41:38.059322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-07-26 16:41:38.059359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.495 qpair failed and we were unable to recover it. 00:36:18.495 [2024-07-26 16:41:38.059577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-07-26 16:41:38.059615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.495 qpair failed and we were unable to recover it. 00:36:18.495 [2024-07-26 16:41:38.059828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-07-26 16:41:38.059864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.495 qpair failed and we were unable to recover it. 00:36:18.495 [2024-07-26 16:41:38.060037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-07-26 16:41:38.060079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.495 qpair failed and we were unable to recover it. 00:36:18.495 [2024-07-26 16:41:38.060287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-07-26 16:41:38.060319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.495 qpair failed and we were unable to recover it. 00:36:18.495 [2024-07-26 16:41:38.060556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-07-26 16:41:38.060593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.495 qpair failed and we were unable to recover it. 00:36:18.495 [2024-07-26 16:41:38.060812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-07-26 16:41:38.060848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.495 qpair failed and we were unable to recover it. 
00:36:18.495 [2024-07-26 16:41:38.061028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-07-26 16:41:38.061068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.495 qpair failed and we were unable to recover it. 00:36:18.495 [2024-07-26 16:41:38.061215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-07-26 16:41:38.061248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.495 qpair failed and we were unable to recover it. 00:36:18.495 [2024-07-26 16:41:38.061392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-07-26 16:41:38.061425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.495 qpair failed and we were unable to recover it. 00:36:18.495 [2024-07-26 16:41:38.061588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-07-26 16:41:38.061621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.495 qpair failed and we were unable to recover it. 00:36:18.495 [2024-07-26 16:41:38.061769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-07-26 16:41:38.061802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.495 qpair failed and we were unable to recover it. 00:36:18.495 [2024-07-26 16:41:38.061979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-07-26 16:41:38.062012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.495 qpair failed and we were unable to recover it. 00:36:18.495 [2024-07-26 16:41:38.062171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-07-26 16:41:38.062205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.495 qpair failed and we were unable to recover it. 00:36:18.495 [2024-07-26 16:41:38.062351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-07-26 16:41:38.062384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.495 qpair failed and we were unable to recover it. 00:36:18.495 [2024-07-26 16:41:38.062541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-07-26 16:41:38.062573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.495 qpair failed and we were unable to recover it. 00:36:18.495 [2024-07-26 16:41:38.062747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-07-26 16:41:38.062780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.495 qpair failed and we were unable to recover it. 
00:36:18.495 [2024-07-26 16:41:38.062927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-07-26 16:41:38.062960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.495 qpair failed and we were unable to recover it. 00:36:18.495 [2024-07-26 16:41:38.063140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-07-26 16:41:38.063174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.495 qpair failed and we were unable to recover it. 00:36:18.495 [2024-07-26 16:41:38.063342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-07-26 16:41:38.063375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.495 qpair failed and we were unable to recover it. 00:36:18.495 [2024-07-26 16:41:38.063572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-07-26 16:41:38.063605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.495 qpair failed and we were unable to recover it. 00:36:18.495 [2024-07-26 16:41:38.063781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-07-26 16:41:38.063815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.495 qpair failed and we were unable to recover it. 00:36:18.495 [2024-07-26 16:41:38.064006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-07-26 16:41:38.064039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.495 qpair failed and we were unable to recover it. 00:36:18.495 [2024-07-26 16:41:38.064227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-07-26 16:41:38.064264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.495 qpair failed and we were unable to recover it. 00:36:18.495 [2024-07-26 16:41:38.064425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-07-26 16:41:38.064458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.495 qpair failed and we were unable to recover it. 00:36:18.495 [2024-07-26 16:41:38.064609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-07-26 16:41:38.064642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.495 qpair failed and we were unable to recover it. 00:36:18.495 [2024-07-26 16:41:38.064813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-07-26 16:41:38.064846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.495 qpair failed and we were unable to recover it. 
00:36:18.495 [2024-07-26 16:41:38.065011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-07-26 16:41:38.065050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.495 qpair failed and we were unable to recover it. 00:36:18.495 [2024-07-26 16:41:38.065206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-07-26 16:41:38.065239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.495 qpair failed and we were unable to recover it. 00:36:18.495 [2024-07-26 16:41:38.065439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-07-26 16:41:38.065471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.495 qpair failed and we were unable to recover it. 00:36:18.495 [2024-07-26 16:41:38.065624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-07-26 16:41:38.065656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.495 qpair failed and we were unable to recover it. 00:36:18.495 [2024-07-26 16:41:38.065839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-07-26 16:41:38.065873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.495 qpair failed and we were unable to recover it. 00:36:18.495 [2024-07-26 16:41:38.066039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-07-26 16:41:38.066084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.495 qpair failed and we were unable to recover it. 00:36:18.495 [2024-07-26 16:41:38.066224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-07-26 16:41:38.066257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.495 qpair failed and we were unable to recover it. 00:36:18.496 [2024-07-26 16:41:38.066436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-07-26 16:41:38.066468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.496 qpair failed and we were unable to recover it. 00:36:18.496 [2024-07-26 16:41:38.066632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-07-26 16:41:38.066665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.496 qpair failed and we were unable to recover it. 00:36:18.496 [2024-07-26 16:41:38.066827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-07-26 16:41:38.066860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.496 qpair failed and we were unable to recover it. 
00:36:18.496 [2024-07-26 16:41:38.067051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-07-26 16:41:38.067091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.496 qpair failed and we were unable to recover it. 00:36:18.496 [2024-07-26 16:41:38.067274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-07-26 16:41:38.067307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.496 qpair failed and we were unable to recover it. 00:36:18.496 [2024-07-26 16:41:38.067475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-07-26 16:41:38.067507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.496 qpair failed and we were unable to recover it. 00:36:18.496 [2024-07-26 16:41:38.067661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-07-26 16:41:38.067721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.496 qpair failed and we were unable to recover it. 00:36:18.496 [2024-07-26 16:41:38.067935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-07-26 16:41:38.067967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.496 qpair failed and we were unable to recover it. 00:36:18.496 [2024-07-26 16:41:38.068141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-07-26 16:41:38.068175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.496 qpair failed and we were unable to recover it. 00:36:18.496 [2024-07-26 16:41:38.068346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-07-26 16:41:38.068378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.496 qpair failed and we were unable to recover it. 00:36:18.496 [2024-07-26 16:41:38.068558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-07-26 16:41:38.068591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.496 qpair failed and we were unable to recover it. 00:36:18.496 [2024-07-26 16:41:38.068765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-07-26 16:41:38.068798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.496 qpair failed and we were unable to recover it. 00:36:18.496 [2024-07-26 16:41:38.068944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-07-26 16:41:38.068976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.496 qpair failed and we were unable to recover it. 
00:36:18.496 [2024-07-26 16:41:38.069149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-07-26 16:41:38.069182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.496 qpair failed and we were unable to recover it. 00:36:18.496 [2024-07-26 16:41:38.069332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-07-26 16:41:38.069373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.496 qpair failed and we were unable to recover it. 00:36:18.496 [2024-07-26 16:41:38.069571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-07-26 16:41:38.069604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.496 qpair failed and we were unable to recover it. 00:36:18.496 [2024-07-26 16:41:38.069783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-07-26 16:41:38.069815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.496 qpair failed and we were unable to recover it. 00:36:18.496 [2024-07-26 16:41:38.069984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-07-26 16:41:38.070016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.496 qpair failed and we were unable to recover it. 00:36:18.496 [2024-07-26 16:41:38.070195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-07-26 16:41:38.070228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.496 qpair failed and we were unable to recover it. 00:36:18.496 [2024-07-26 16:41:38.070410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-07-26 16:41:38.070442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.496 qpair failed and we were unable to recover it. 00:36:18.496 [2024-07-26 16:41:38.070589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-07-26 16:41:38.070623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.496 qpair failed and we were unable to recover it. 00:36:18.496 [2024-07-26 16:41:38.070797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-07-26 16:41:38.070830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.496 qpair failed and we were unable to recover it. 00:36:18.496 [2024-07-26 16:41:38.071003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-07-26 16:41:38.071045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.496 qpair failed and we were unable to recover it. 
00:36:18.496 [2024-07-26 16:41:38.071233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-07-26 16:41:38.071281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.496 qpair failed and we were unable to recover it. 00:36:18.496 [2024-07-26 16:41:38.071470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-07-26 16:41:38.071505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.496 qpair failed and we were unable to recover it. 00:36:18.496 [2024-07-26 16:41:38.071692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-07-26 16:41:38.071726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.496 qpair failed and we were unable to recover it. 00:36:18.496 [2024-07-26 16:41:38.071880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-07-26 16:41:38.071913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.496 qpair failed and we were unable to recover it. 00:36:18.496 [2024-07-26 16:41:38.072101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-07-26 16:41:38.072134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.496 qpair failed and we were unable to recover it. 00:36:18.496 [2024-07-26 16:41:38.072293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-07-26 16:41:38.072325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.496 qpair failed and we were unable to recover it. 00:36:18.496 [2024-07-26 16:41:38.072507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-07-26 16:41:38.072550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.496 qpair failed and we were unable to recover it. 00:36:18.496 [2024-07-26 16:41:38.072715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-07-26 16:41:38.072749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.496 qpair failed and we were unable to recover it. 00:36:18.496 [2024-07-26 16:41:38.072905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-07-26 16:41:38.072937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.496 qpair failed and we were unable to recover it. 00:36:18.496 [2024-07-26 16:41:38.073138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-07-26 16:41:38.073171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.496 qpair failed and we were unable to recover it. 
00:36:18.496 [2024-07-26 16:41:38.073326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-07-26 16:41:38.073362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.496 qpair failed and we were unable to recover it. 00:36:18.496 [2024-07-26 16:41:38.073533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-07-26 16:41:38.073566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.496 qpair failed and we were unable to recover it. 00:36:18.496 [2024-07-26 16:41:38.073739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-07-26 16:41:38.073771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.496 qpair failed and we were unable to recover it. 00:36:18.496 [2024-07-26 16:41:38.073930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-07-26 16:41:38.073963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.496 qpair failed and we were unable to recover it. 00:36:18.496 [2024-07-26 16:41:38.074114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.497 [2024-07-26 16:41:38.074147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.497 qpair failed and we were unable to recover it. 00:36:18.497 [2024-07-26 16:41:38.074305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.497 [2024-07-26 16:41:38.074342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.497 qpair failed and we were unable to recover it. 00:36:18.497 [2024-07-26 16:41:38.074493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.497 [2024-07-26 16:41:38.074526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.497 qpair failed and we were unable to recover it. 00:36:18.497 [2024-07-26 16:41:38.074674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.497 [2024-07-26 16:41:38.074707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.497 qpair failed and we were unable to recover it. 00:36:18.497 [2024-07-26 16:41:38.074874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.497 [2024-07-26 16:41:38.074906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.497 qpair failed and we were unable to recover it. 00:36:18.497 [2024-07-26 16:41:38.075090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.497 [2024-07-26 16:41:38.075123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.497 qpair failed and we were unable to recover it. 
00:36:18.497 [2024-07-26 16:41:38.075295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.497 [2024-07-26 16:41:38.075327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.497 qpair failed and we were unable to recover it. 00:36:18.497 [2024-07-26 16:41:38.075538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.497 [2024-07-26 16:41:38.075570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.497 qpair failed and we were unable to recover it. 00:36:18.497 [2024-07-26 16:41:38.075774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.497 [2024-07-26 16:41:38.075807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.497 qpair failed and we were unable to recover it. 00:36:18.497 [2024-07-26 16:41:38.075954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.497 [2024-07-26 16:41:38.075986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.497 qpair failed and we were unable to recover it. 00:36:18.497 [2024-07-26 16:41:38.076132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.497 [2024-07-26 16:41:38.076165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.497 qpair failed and we were unable to recover it. 00:36:18.497 [2024-07-26 16:41:38.076322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.497 [2024-07-26 16:41:38.076356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.497 qpair failed and we were unable to recover it. 00:36:18.497 [2024-07-26 16:41:38.076551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.497 [2024-07-26 16:41:38.076584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.497 qpair failed and we were unable to recover it. 00:36:18.497 [2024-07-26 16:41:38.076730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.497 [2024-07-26 16:41:38.076763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.497 qpair failed and we were unable to recover it. 00:36:18.497 [2024-07-26 16:41:38.076922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.497 [2024-07-26 16:41:38.076960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.497 qpair failed and we were unable to recover it. 00:36:18.497 [2024-07-26 16:41:38.077166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.497 [2024-07-26 16:41:38.077200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.497 qpair failed and we were unable to recover it. 
00:36:18.497 [2024-07-26 16:41:38.077373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.497 [2024-07-26 16:41:38.077406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.497 qpair failed and we were unable to recover it. 00:36:18.497 [2024-07-26 16:41:38.077585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.497 [2024-07-26 16:41:38.077618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.497 qpair failed and we were unable to recover it. 00:36:18.497 [2024-07-26 16:41:38.077786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.497 [2024-07-26 16:41:38.077818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.497 qpair failed and we were unable to recover it. 00:36:18.497 [2024-07-26 16:41:38.077990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.497 [2024-07-26 16:41:38.078023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.497 qpair failed and we were unable to recover it. 00:36:18.497 [2024-07-26 16:41:38.078193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.497 [2024-07-26 16:41:38.078226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.497 qpair failed and we were unable to recover it. 00:36:18.497 [2024-07-26 16:41:38.078407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.497 [2024-07-26 16:41:38.078440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.497 qpair failed and we were unable to recover it. 00:36:18.497 [2024-07-26 16:41:38.078584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.497 [2024-07-26 16:41:38.078616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.497 qpair failed and we were unable to recover it. 00:36:18.497 [2024-07-26 16:41:38.078815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.497 [2024-07-26 16:41:38.078848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.497 qpair failed and we were unable to recover it. 00:36:18.497 [2024-07-26 16:41:38.078993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.497 [2024-07-26 16:41:38.079026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.497 qpair failed and we were unable to recover it. 00:36:18.497 [2024-07-26 16:41:38.079232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.497 [2024-07-26 16:41:38.079280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.497 qpair failed and we were unable to recover it. 
00:36:18.497 [2024-07-26 16:41:38.079480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.497 [2024-07-26 16:41:38.079514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.497 qpair failed and we were unable to recover it. 00:36:18.497 [2024-07-26 16:41:38.079696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.497 [2024-07-26 16:41:38.079730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.497 qpair failed and we were unable to recover it. 00:36:18.497 [2024-07-26 16:41:38.079909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.497 [2024-07-26 16:41:38.079943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.497 qpair failed and we were unable to recover it. 00:36:18.497 [2024-07-26 16:41:38.080165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.497 [2024-07-26 16:41:38.080199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.497 qpair failed and we were unable to recover it. 00:36:18.497 [2024-07-26 16:41:38.080373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.497 [2024-07-26 16:41:38.080404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.497 qpair failed and we were unable to recover it. 00:36:18.497 [2024-07-26 16:41:38.080580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.497 [2024-07-26 16:41:38.080612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.497 qpair failed and we were unable to recover it. 00:36:18.497 [2024-07-26 16:41:38.080772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.497 [2024-07-26 16:41:38.080810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.497 qpair failed and we were unable to recover it. 00:36:18.497 [2024-07-26 16:41:38.081025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.497 [2024-07-26 16:41:38.081065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.497 qpair failed and we were unable to recover it. 00:36:18.497 [2024-07-26 16:41:38.081244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.497 [2024-07-26 16:41:38.081277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.497 qpair failed and we were unable to recover it. 00:36:18.497 [2024-07-26 16:41:38.081462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.497 [2024-07-26 16:41:38.081496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.497 qpair failed and we were unable to recover it. 
00:36:18.497 [2024-07-26 16:41:38.081646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.497 [2024-07-26 16:41:38.081679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.497 qpair failed and we were unable to recover it. 00:36:18.497 [2024-07-26 16:41:38.081853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-07-26 16:41:38.081901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 00:36:18.498 [2024-07-26 16:41:38.082093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-07-26 16:41:38.082128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 00:36:18.498 [2024-07-26 16:41:38.082317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-07-26 16:41:38.082361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 00:36:18.498 [2024-07-26 16:41:38.082559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-07-26 16:41:38.082607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 00:36:18.498 [2024-07-26 16:41:38.082769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-07-26 16:41:38.082804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 00:36:18.498 [2024-07-26 16:41:38.082999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-07-26 16:41:38.083033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 00:36:18.498 [2024-07-26 16:41:38.083230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-07-26 16:41:38.083265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 00:36:18.498 [2024-07-26 16:41:38.083469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-07-26 16:41:38.083503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 00:36:18.498 [2024-07-26 16:41:38.083644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-07-26 16:41:38.083677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 
00:36:18.498 [2024-07-26 16:41:38.083867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-07-26 16:41:38.083901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 00:36:18.498 [2024-07-26 16:41:38.084081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-07-26 16:41:38.084125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 00:36:18.498 [2024-07-26 16:41:38.084302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-07-26 16:41:38.084336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 00:36:18.498 [2024-07-26 16:41:38.084518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-07-26 16:41:38.084550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 00:36:18.498 [2024-07-26 16:41:38.084702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-07-26 16:41:38.084735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 00:36:18.498 [2024-07-26 16:41:38.084909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-07-26 16:41:38.084942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 00:36:18.498 [2024-07-26 16:41:38.085133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-07-26 16:41:38.085169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 00:36:18.498 [2024-07-26 16:41:38.085379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-07-26 16:41:38.085413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 00:36:18.498 [2024-07-26 16:41:38.085593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-07-26 16:41:38.085626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 00:36:18.498 [2024-07-26 16:41:38.085796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-07-26 16:41:38.085828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 
00:36:18.498 [2024-07-26 16:41:38.085996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-07-26 16:41:38.086029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 00:36:18.498 [2024-07-26 16:41:38.086181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-07-26 16:41:38.086214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 00:36:18.498 [2024-07-26 16:41:38.086383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-07-26 16:41:38.086417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 00:36:18.498 [2024-07-26 16:41:38.086601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-07-26 16:41:38.086634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 00:36:18.498 [2024-07-26 16:41:38.086775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-07-26 16:41:38.086808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 00:36:18.498 [2024-07-26 16:41:38.086986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-07-26 16:41:38.087020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 00:36:18.498 [2024-07-26 16:41:38.087181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-07-26 16:41:38.087215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 00:36:18.498 [2024-07-26 16:41:38.087393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-07-26 16:41:38.087426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 00:36:18.498 [2024-07-26 16:41:38.087619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-07-26 16:41:38.087651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 00:36:18.498 [2024-07-26 16:41:38.087857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-07-26 16:41:38.087890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 
00:36:18.498 [2024-07-26 16:41:38.088045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-07-26 16:41:38.088085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 00:36:18.498 [2024-07-26 16:41:38.088290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-07-26 16:41:38.088325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 00:36:18.498 [2024-07-26 16:41:38.088508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-07-26 16:41:38.088542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 00:36:18.498 [2024-07-26 16:41:38.088727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-07-26 16:41:38.088771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.498 qpair failed and we were unable to recover it. 00:36:18.498 [2024-07-26 16:41:38.088915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.498 [2024-07-26 16:41:38.088947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-07-26 16:41:38.089151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-07-26 16:41:38.089185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-07-26 16:41:38.089359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-07-26 16:41:38.089395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-07-26 16:41:38.089537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-07-26 16:41:38.089569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-07-26 16:41:38.089710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-07-26 16:41:38.089744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-07-26 16:41:38.089911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-07-26 16:41:38.089944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 
00:36:18.499 [2024-07-26 16:41:38.090138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-07-26 16:41:38.090186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-07-26 16:41:38.090376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-07-26 16:41:38.090413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-07-26 16:41:38.090590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-07-26 16:41:38.090623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-07-26 16:41:38.090770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-07-26 16:41:38.090803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-07-26 16:41:38.091006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-07-26 16:41:38.091039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-07-26 16:41:38.091218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-07-26 16:41:38.091251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-07-26 16:41:38.091423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-07-26 16:41:38.091456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-07-26 16:41:38.091604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-07-26 16:41:38.091638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-07-26 16:41:38.091835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-07-26 16:41:38.091867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-07-26 16:41:38.092015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-07-26 16:41:38.092050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 
00:36:18.499 [2024-07-26 16:41:38.092235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-07-26 16:41:38.092268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-07-26 16:41:38.092443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-07-26 16:41:38.092476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-07-26 16:41:38.092625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-07-26 16:41:38.092658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-07-26 16:41:38.092838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-07-26 16:41:38.092871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-07-26 16:41:38.093043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-07-26 16:41:38.093082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-07-26 16:41:38.093262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-07-26 16:41:38.093295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-07-26 16:41:38.093500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-07-26 16:41:38.093533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-07-26 16:41:38.093708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-07-26 16:41:38.093740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-07-26 16:41:38.093894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-07-26 16:41:38.093927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-07-26 16:41:38.094103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-07-26 16:41:38.094136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 
00:36:18.499 [2024-07-26 16:41:38.094349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-07-26 16:41:38.094382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-07-26 16:41:38.094536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-07-26 16:41:38.094568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-07-26 16:41:38.094718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-07-26 16:41:38.094751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-07-26 16:41:38.094935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-07-26 16:41:38.094968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-07-26 16:41:38.095146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-07-26 16:41:38.095179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-07-26 16:41:38.095336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-07-26 16:41:38.095368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-07-26 16:41:38.095541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-07-26 16:41:38.095573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-07-26 16:41:38.095728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-07-26 16:41:38.095762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-07-26 16:41:38.095939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-07-26 16:41:38.095972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-07-26 16:41:38.096151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-07-26 16:41:38.096184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 
00:36:18.499 [2024-07-26 16:41:38.096335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-07-26 16:41:38.096368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-07-26 16:41:38.096514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.499 [2024-07-26 16:41:38.096547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.499 qpair failed and we were unable to recover it. 00:36:18.499 [2024-07-26 16:41:38.096744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-07-26 16:41:38.096776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-07-26 16:41:38.096946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-07-26 16:41:38.096979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-07-26 16:41:38.097167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-07-26 16:41:38.097201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-07-26 16:41:38.097378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-07-26 16:41:38.097411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-07-26 16:41:38.097609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-07-26 16:41:38.097646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-07-26 16:41:38.097826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-07-26 16:41:38.097859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-07-26 16:41:38.098000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-07-26 16:41:38.098033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-07-26 16:41:38.098192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-07-26 16:41:38.098225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 
00:36:18.500 [2024-07-26 16:41:38.098397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-07-26 16:41:38.098429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-07-26 16:41:38.098630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-07-26 16:41:38.098663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-07-26 16:41:38.098844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-07-26 16:41:38.098877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-07-26 16:41:38.099052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-07-26 16:41:38.099092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-07-26 16:41:38.099234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-07-26 16:41:38.099267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-07-26 16:41:38.099455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-07-26 16:41:38.099501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-07-26 16:41:38.099759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-07-26 16:41:38.099795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-07-26 16:41:38.099970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-07-26 16:41:38.100003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-07-26 16:41:38.100196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-07-26 16:41:38.100231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-07-26 16:41:38.100407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-07-26 16:41:38.100441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 
00:36:18.500 [2024-07-26 16:41:38.100632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-07-26 16:41:38.100665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-07-26 16:41:38.100841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-07-26 16:41:38.100875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-07-26 16:41:38.101106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-07-26 16:41:38.101140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-07-26 16:41:38.101316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-07-26 16:41:38.101349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-07-26 16:41:38.101503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-07-26 16:41:38.101537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-07-26 16:41:38.101692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-07-26 16:41:38.101725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-07-26 16:41:38.101899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-07-26 16:41:38.101932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-07-26 16:41:38.102135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-07-26 16:41:38.102167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-07-26 16:41:38.102316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-07-26 16:41:38.102349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-07-26 16:41:38.102532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-07-26 16:41:38.102564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 
00:36:18.500 [2024-07-26 16:41:38.102734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-07-26 16:41:38.102766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-07-26 16:41:38.102981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-07-26 16:41:38.103014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-07-26 16:41:38.103165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-07-26 16:41:38.103198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-07-26 16:41:38.103373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-07-26 16:41:38.103406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-07-26 16:41:38.103581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-07-26 16:41:38.103614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-07-26 16:41:38.103816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-07-26 16:41:38.103849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-07-26 16:41:38.104047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-07-26 16:41:38.104088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-07-26 16:41:38.104269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-07-26 16:41:38.104302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-07-26 16:41:38.104505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-07-26 16:41:38.104538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 00:36:18.500 [2024-07-26 16:41:38.104741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.500 [2024-07-26 16:41:38.104775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.500 qpair failed and we were unable to recover it. 
00:36:18.501 [2024-07-26 16:41:38.104975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-07-26 16:41:38.105008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-07-26 16:41:38.105187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-07-26 16:41:38.105221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-07-26 16:41:38.105399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-07-26 16:41:38.105432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-07-26 16:41:38.105614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-07-26 16:41:38.105646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-07-26 16:41:38.105823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-07-26 16:41:38.105856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-07-26 16:41:38.106028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-07-26 16:41:38.106065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-07-26 16:41:38.106245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-07-26 16:41:38.106282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-07-26 16:41:38.106458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-07-26 16:41:38.106490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-07-26 16:41:38.106642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-07-26 16:41:38.106675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-07-26 16:41:38.106857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-07-26 16:41:38.106890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 
00:36:18.501 [2024-07-26 16:41:38.107070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-07-26 16:41:38.107103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-07-26 16:41:38.107275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-07-26 16:41:38.107323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-07-26 16:41:38.107514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-07-26 16:41:38.107549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-07-26 16:41:38.107716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-07-26 16:41:38.107750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-07-26 16:41:38.107951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-07-26 16:41:38.107991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-07-26 16:41:38.108149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-07-26 16:41:38.108183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-07-26 16:41:38.108333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-07-26 16:41:38.108366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-07-26 16:41:38.108552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-07-26 16:41:38.108585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-07-26 16:41:38.108765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-07-26 16:41:38.108798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-07-26 16:41:38.108970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-07-26 16:41:38.109004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 
00:36:18.501 [2024-07-26 16:41:38.109169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-07-26 16:41:38.109213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-07-26 16:41:38.109394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-07-26 16:41:38.109427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-07-26 16:41:38.109603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-07-26 16:41:38.109636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-07-26 16:41:38.109806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-07-26 16:41:38.109838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-07-26 16:41:38.109982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-07-26 16:41:38.110014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-07-26 16:41:38.110193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-07-26 16:41:38.110226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-07-26 16:41:38.110442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-07-26 16:41:38.110475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-07-26 16:41:38.110655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-07-26 16:41:38.110687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-07-26 16:41:38.110837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-07-26 16:41:38.110869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-07-26 16:41:38.111045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-07-26 16:41:38.111086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 
00:36:18.501 [2024-07-26 16:41:38.111246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-07-26 16:41:38.111279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-07-26 16:41:38.111454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-07-26 16:41:38.111486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-07-26 16:41:38.111657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-07-26 16:41:38.111689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-07-26 16:41:38.111872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-07-26 16:41:38.111905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-07-26 16:41:38.112076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-07-26 16:41:38.112109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-07-26 16:41:38.112295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-07-26 16:41:38.112327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-07-26 16:41:38.112469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.501 [2024-07-26 16:41:38.112502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.501 qpair failed and we were unable to recover it. 00:36:18.501 [2024-07-26 16:41:38.112706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-07-26 16:41:38.112738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-07-26 16:41:38.112923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-07-26 16:41:38.112957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-07-26 16:41:38.113138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-07-26 16:41:38.113170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 
00:36:18.502 [2024-07-26 16:41:38.113347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-07-26 16:41:38.113379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-07-26 16:41:38.113523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-07-26 16:41:38.113556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-07-26 16:41:38.113734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-07-26 16:41:38.113766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-07-26 16:41:38.113910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-07-26 16:41:38.113942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-07-26 16:41:38.114135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-07-26 16:41:38.114183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-07-26 16:41:38.114374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-07-26 16:41:38.114409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-07-26 16:41:38.114587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-07-26 16:41:38.114627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-07-26 16:41:38.114776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-07-26 16:41:38.114810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-07-26 16:41:38.114989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-07-26 16:41:38.115022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-07-26 16:41:38.115195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-07-26 16:41:38.115228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 
00:36:18.502 [2024-07-26 16:41:38.115405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-07-26 16:41:38.115437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-07-26 16:41:38.115585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-07-26 16:41:38.115618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-07-26 16:41:38.115792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-07-26 16:41:38.115824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-07-26 16:41:38.115972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-07-26 16:41:38.116004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-07-26 16:41:38.116198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-07-26 16:41:38.116232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-07-26 16:41:38.116434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-07-26 16:41:38.116466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-07-26 16:41:38.116634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-07-26 16:41:38.116667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-07-26 16:41:38.116842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-07-26 16:41:38.116875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-07-26 16:41:38.117076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-07-26 16:41:38.117109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-07-26 16:41:38.117287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-07-26 16:41:38.117319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 
00:36:18.502 [2024-07-26 16:41:38.117445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:36:18.502 [2024-07-26 16:41:38.117505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-07-26 16:41:38.117539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-07-26 16:41:38.117697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-07-26 16:41:38.117729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-07-26 16:41:38.117877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-07-26 16:41:38.117915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-07-26 16:41:38.118090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-07-26 16:41:38.118123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-07-26 16:41:38.118301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-07-26 16:41:38.118334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-07-26 16:41:38.118501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-07-26 16:41:38.118533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-07-26 16:41:38.118707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-07-26 16:41:38.118740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-07-26 16:41:38.118947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-07-26 16:41:38.118980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-07-26 16:41:38.119176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-07-26 16:41:38.119223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 
00:36:18.502 [2024-07-26 16:41:38.119389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-07-26 16:41:38.119425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-07-26 16:41:38.119595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-07-26 16:41:38.119628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-07-26 16:41:38.119833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-07-26 16:41:38.119866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.502 [2024-07-26 16:41:38.120045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.502 [2024-07-26 16:41:38.120085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.502 qpair failed and we were unable to recover it. 00:36:18.503 [2024-07-26 16:41:38.120271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-07-26 16:41:38.120303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-07-26 16:41:38.120500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-07-26 16:41:38.120532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-07-26 16:41:38.120733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-07-26 16:41:38.120767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-07-26 16:41:38.120938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-07-26 16:41:38.120970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-07-26 16:41:38.121126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-07-26 16:41:38.121160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-07-26 16:41:38.121339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-07-26 16:41:38.121372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 
00:36:18.503 [2024-07-26 16:41:38.121545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-07-26 16:41:38.121579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-07-26 16:41:38.121756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-07-26 16:41:38.121788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-07-26 16:41:38.121977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-07-26 16:41:38.122010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-07-26 16:41:38.122191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-07-26 16:41:38.122224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-07-26 16:41:38.122398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-07-26 16:41:38.122430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-07-26 16:41:38.122634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-07-26 16:41:38.122667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-07-26 16:41:38.122807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-07-26 16:41:38.122839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-07-26 16:41:38.123032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-07-26 16:41:38.123091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-07-26 16:41:38.123273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-07-26 16:41:38.123310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-07-26 16:41:38.123481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-07-26 16:41:38.123522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 
00:36:18.503 [2024-07-26 16:41:38.123729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-07-26 16:41:38.123762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-07-26 16:41:38.123933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-07-26 16:41:38.123967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-07-26 16:41:38.124150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-07-26 16:41:38.124185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-07-26 16:41:38.124364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-07-26 16:41:38.124397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-07-26 16:41:38.124587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-07-26 16:41:38.124619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-07-26 16:41:38.124771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-07-26 16:41:38.124814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-07-26 16:41:38.124974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-07-26 16:41:38.125007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-07-26 16:41:38.125168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-07-26 16:41:38.125201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-07-26 16:41:38.125410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-07-26 16:41:38.125442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-07-26 16:41:38.125593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-07-26 16:41:38.125627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 
00:36:18.503 [2024-07-26 16:41:38.125807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-07-26 16:41:38.125845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-07-26 16:41:38.126053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-07-26 16:41:38.126092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-07-26 16:41:38.126271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-07-26 16:41:38.126304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-07-26 16:41:38.126468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-07-26 16:41:38.126502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-07-26 16:41:38.126704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-07-26 16:41:38.126737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-07-26 16:41:38.126918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-07-26 16:41:38.126950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-07-26 16:41:38.127228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-07-26 16:41:38.127262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-07-26 16:41:38.127451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.503 [2024-07-26 16:41:38.127483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.503 qpair failed and we were unable to recover it. 00:36:18.503 [2024-07-26 16:41:38.127671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-07-26 16:41:38.127703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 00:36:18.504 [2024-07-26 16:41:38.127860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-07-26 16:41:38.127892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 
00:36:18.504 [2024-07-26 16:41:38.128077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-07-26 16:41:38.128110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 00:36:18.504 [2024-07-26 16:41:38.128264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-07-26 16:41:38.128296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 00:36:18.504 [2024-07-26 16:41:38.128483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-07-26 16:41:38.128517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 00:36:18.504 [2024-07-26 16:41:38.128690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-07-26 16:41:38.128723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 00:36:18.504 [2024-07-26 16:41:38.128880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-07-26 16:41:38.128913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 00:36:18.504 [2024-07-26 16:41:38.129105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-07-26 16:41:38.129138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 00:36:18.504 [2024-07-26 16:41:38.129342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-07-26 16:41:38.129373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 00:36:18.504 [2024-07-26 16:41:38.129546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-07-26 16:41:38.129579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 00:36:18.504 [2024-07-26 16:41:38.129741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-07-26 16:41:38.129775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 00:36:18.504 [2024-07-26 16:41:38.129923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-07-26 16:41:38.129956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 
00:36:18.504 [2024-07-26 16:41:38.130158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-07-26 16:41:38.130191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 00:36:18.504 [2024-07-26 16:41:38.130393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-07-26 16:41:38.130425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 00:36:18.504 [2024-07-26 16:41:38.130575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-07-26 16:41:38.130608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 00:36:18.504 [2024-07-26 16:41:38.130860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-07-26 16:41:38.130893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 00:36:18.504 [2024-07-26 16:41:38.131074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-07-26 16:41:38.131108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 00:36:18.504 [2024-07-26 16:41:38.131287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-07-26 16:41:38.131320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 00:36:18.504 [2024-07-26 16:41:38.131479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-07-26 16:41:38.131512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 00:36:18.504 [2024-07-26 16:41:38.131717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-07-26 16:41:38.131750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 00:36:18.504 [2024-07-26 16:41:38.131896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-07-26 16:41:38.131928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 00:36:18.504 [2024-07-26 16:41:38.132103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-07-26 16:41:38.132142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 
00:36:18.504 [2024-07-26 16:41:38.132294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-07-26 16:41:38.132327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 00:36:18.504 [2024-07-26 16:41:38.132512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-07-26 16:41:38.132545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 00:36:18.504 [2024-07-26 16:41:38.132746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-07-26 16:41:38.132778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 00:36:18.504 [2024-07-26 16:41:38.132958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-07-26 16:41:38.132990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 00:36:18.504 [2024-07-26 16:41:38.133149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-07-26 16:41:38.133183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 00:36:18.504 [2024-07-26 16:41:38.133329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-07-26 16:41:38.133371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 00:36:18.504 [2024-07-26 16:41:38.133558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-07-26 16:41:38.133590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 00:36:18.504 [2024-07-26 16:41:38.133753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-07-26 16:41:38.133786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 00:36:18.504 [2024-07-26 16:41:38.133986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-07-26 16:41:38.134018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 00:36:18.504 [2024-07-26 16:41:38.134230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-07-26 16:41:38.134263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 
00:36:18.504 [2024-07-26 16:41:38.134452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-07-26 16:41:38.134490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 00:36:18.504 [2024-07-26 16:41:38.134692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-07-26 16:41:38.134725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 00:36:18.504 [2024-07-26 16:41:38.134905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-07-26 16:41:38.134940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 00:36:18.504 [2024-07-26 16:41:38.135108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-07-26 16:41:38.135148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 00:36:18.504 [2024-07-26 16:41:38.135328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-07-26 16:41:38.135373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 00:36:18.504 [2024-07-26 16:41:38.135551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.504 [2024-07-26 16:41:38.135586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.504 qpair failed and we were unable to recover it. 00:36:18.505 [2024-07-26 16:41:38.135743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.505 [2024-07-26 16:41:38.135777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.505 qpair failed and we were unable to recover it. 00:36:18.505 [2024-07-26 16:41:38.135956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.505 [2024-07-26 16:41:38.135990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.505 qpair failed and we were unable to recover it. 00:36:18.505 [2024-07-26 16:41:38.136175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.505 [2024-07-26 16:41:38.136208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.505 qpair failed and we were unable to recover it. 00:36:18.505 [2024-07-26 16:41:38.136412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.505 [2024-07-26 16:41:38.136446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.505 qpair failed and we were unable to recover it. 
00:36:18.505 [2024-07-26 16:41:38.136598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.505 [2024-07-26 16:41:38.136633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.505 qpair failed and we were unable to recover it. 00:36:18.505 [2024-07-26 16:41:38.136814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.505 [2024-07-26 16:41:38.136849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.505 qpair failed and we were unable to recover it. 00:36:18.505 [2024-07-26 16:41:38.137000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.505 [2024-07-26 16:41:38.137033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.505 qpair failed and we were unable to recover it. 00:36:18.505 [2024-07-26 16:41:38.137232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.505 [2024-07-26 16:41:38.137266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.505 qpair failed and we were unable to recover it. 00:36:18.505 [2024-07-26 16:41:38.137493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.505 [2024-07-26 16:41:38.137527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.505 qpair failed and we were unable to recover it. 00:36:18.505 [2024-07-26 16:41:38.137710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.505 [2024-07-26 16:41:38.137745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.505 qpair failed and we were unable to recover it. 00:36:18.505 [2024-07-26 16:41:38.137947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.505 [2024-07-26 16:41:38.137981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.505 qpair failed and we were unable to recover it. 00:36:18.505 [2024-07-26 16:41:38.138132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.505 [2024-07-26 16:41:38.138165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.505 qpair failed and we were unable to recover it. 00:36:18.505 [2024-07-26 16:41:38.138349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.505 [2024-07-26 16:41:38.138382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.505 qpair failed and we were unable to recover it. 00:36:18.505 [2024-07-26 16:41:38.138561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.505 [2024-07-26 16:41:38.138622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.505 qpair failed and we were unable to recover it. 
00:36:18.505 [2024-07-26 16:41:38.138828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.505 [2024-07-26 16:41:38.138862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.505 qpair failed and we were unable to recover it. 00:36:18.505 [2024-07-26 16:41:38.139082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.505 [2024-07-26 16:41:38.139124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.505 qpair failed and we were unable to recover it. 00:36:18.505 [2024-07-26 16:41:38.139355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.505 [2024-07-26 16:41:38.139404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.505 qpair failed and we were unable to recover it. 00:36:18.505 [2024-07-26 16:41:38.139589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.505 [2024-07-26 16:41:38.139623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.505 qpair failed and we were unable to recover it. 00:36:18.505 [2024-07-26 16:41:38.139808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.505 [2024-07-26 16:41:38.139842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.505 qpair failed and we were unable to recover it. 00:36:18.505 [2024-07-26 16:41:38.139994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.505 [2024-07-26 16:41:38.140029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.505 qpair failed and we were unable to recover it. 00:36:18.505 [2024-07-26 16:41:38.140262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.505 [2024-07-26 16:41:38.140296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.505 qpair failed and we were unable to recover it. 00:36:18.505 [2024-07-26 16:41:38.140500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.505 [2024-07-26 16:41:38.140535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.505 qpair failed and we were unable to recover it. 00:36:18.505 [2024-07-26 16:41:38.140684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.505 [2024-07-26 16:41:38.140718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.505 qpair failed and we were unable to recover it. 00:36:18.505 [2024-07-26 16:41:38.140899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.505 [2024-07-26 16:41:38.140933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.505 qpair failed and we were unable to recover it. 
00:36:18.505 [2024-07-26 16:41:38.141074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.505 [2024-07-26 16:41:38.141118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.505 qpair failed and we were unable to recover it. 00:36:18.505 [2024-07-26 16:41:38.141290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.505 [2024-07-26 16:41:38.141323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.505 qpair failed and we were unable to recover it. 00:36:18.505 [2024-07-26 16:41:38.141536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.505 [2024-07-26 16:41:38.141570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.505 qpair failed and we were unable to recover it. 00:36:18.505 [2024-07-26 16:41:38.141752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.505 [2024-07-26 16:41:38.141786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.505 qpair failed and we were unable to recover it. 00:36:18.505 [2024-07-26 16:41:38.141936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.505 [2024-07-26 16:41:38.141970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.505 qpair failed and we were unable to recover it. 00:36:18.505 [2024-07-26 16:41:38.142192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.505 [2024-07-26 16:41:38.142226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.505 qpair failed and we were unable to recover it. 00:36:18.505 [2024-07-26 16:41:38.142426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.505 [2024-07-26 16:41:38.142459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.505 qpair failed and we were unable to recover it. 00:36:18.505 [2024-07-26 16:41:38.142707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.505 [2024-07-26 16:41:38.142739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.505 qpair failed and we were unable to recover it. 00:36:18.505 [2024-07-26 16:41:38.142928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.505 [2024-07-26 16:41:38.142962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.505 qpair failed and we were unable to recover it. 00:36:18.505 [2024-07-26 16:41:38.143122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.505 [2024-07-26 16:41:38.143157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.505 qpair failed and we were unable to recover it. 
00:36:18.505 [2024-07-26 16:41:38.143359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.505 [2024-07-26 16:41:38.143397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:36:18.505 qpair failed and we were unable to recover it.
00:36:18.506 [2024-07-26 16:41:38.145604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.506 [2024-07-26 16:41:38.145655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:36:18.506 qpair failed and we were unable to recover it.
00:36:18.506 [2024-07-26 16:41:38.149363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.506 [2024-07-26 16:41:38.149414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:18.506 qpair failed and we were unable to recover it.
00:36:18.507 [2024-07-26 16:41:38.152558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.507 [2024-07-26 16:41:38.152609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:36:18.507 qpair failed and we were unable to recover it.
[... the same two-line failure (connect() errno = 111 from posix.c:1023, followed by the sock connection error from nvme_tcp.c:2383 against addr=10.0.0.2, port=4420) repeats continuously from 16:41:38.143 through 16:41:38.188 across tqpairs 0x615000210000, 0x6150001ffe80, 0x6150001f2780, and 0x61500021ff00; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:36:18.511 [2024-07-26 16:41:38.188937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.511 [2024-07-26 16:41:38.188972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:36:18.511 qpair failed and we were unable to recover it.
00:36:18.511 [2024-07-26 16:41:38.189120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-07-26 16:41:38.189153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-07-26 16:41:38.189306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-07-26 16:41:38.189339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-07-26 16:41:38.189496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-07-26 16:41:38.189529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-07-26 16:41:38.189680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-07-26 16:41:38.189715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-07-26 16:41:38.189893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-07-26 16:41:38.189928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-07-26 16:41:38.190117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-07-26 16:41:38.190151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-07-26 16:41:38.190322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-07-26 16:41:38.190365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-07-26 16:41:38.190537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-07-26 16:41:38.190581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-07-26 16:41:38.190730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-07-26 16:41:38.190764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-07-26 16:41:38.190948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-07-26 16:41:38.190982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 
00:36:18.511 [2024-07-26 16:41:38.191163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-07-26 16:41:38.191197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-07-26 16:41:38.191343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-07-26 16:41:38.191386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-07-26 16:41:38.191547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.511 [2024-07-26 16:41:38.191581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.511 qpair failed and we were unable to recover it. 00:36:18.511 [2024-07-26 16:41:38.191758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-07-26 16:41:38.191805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-07-26 16:41:38.192005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-07-26 16:41:38.192040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-07-26 16:41:38.192240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-07-26 16:41:38.192274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-07-26 16:41:38.192448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-07-26 16:41:38.192483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-07-26 16:41:38.192667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-07-26 16:41:38.192705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-07-26 16:41:38.192877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-07-26 16:41:38.192912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-07-26 16:41:38.193130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-07-26 16:41:38.193172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 
00:36:18.512 [2024-07-26 16:41:38.193341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-07-26 16:41:38.193382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-07-26 16:41:38.193579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-07-26 16:41:38.193613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-07-26 16:41:38.193789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-07-26 16:41:38.193824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-07-26 16:41:38.194013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-07-26 16:41:38.194048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-07-26 16:41:38.194266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-07-26 16:41:38.194300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-07-26 16:41:38.194483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-07-26 16:41:38.194518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-07-26 16:41:38.194673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-07-26 16:41:38.194707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-07-26 16:41:38.194863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-07-26 16:41:38.194898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-07-26 16:41:38.195054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-07-26 16:41:38.195094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-07-26 16:41:38.195301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-07-26 16:41:38.195335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 
00:36:18.512 [2024-07-26 16:41:38.195487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-07-26 16:41:38.195521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-07-26 16:41:38.195701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-07-26 16:41:38.195735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-07-26 16:41:38.195887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-07-26 16:41:38.195933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-07-26 16:41:38.196149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-07-26 16:41:38.196184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-07-26 16:41:38.196330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-07-26 16:41:38.196364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-07-26 16:41:38.196557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-07-26 16:41:38.196592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-07-26 16:41:38.196768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-07-26 16:41:38.196803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-07-26 16:41:38.196959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-07-26 16:41:38.196993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-07-26 16:41:38.197195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-07-26 16:41:38.197245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-07-26 16:41:38.197433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-07-26 16:41:38.197480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 
00:36:18.512 [2024-07-26 16:41:38.197637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-07-26 16:41:38.197673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-07-26 16:41:38.197886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-07-26 16:41:38.197923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-07-26 16:41:38.198085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-07-26 16:41:38.198123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-07-26 16:41:38.198308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-07-26 16:41:38.198359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-07-26 16:41:38.198576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-07-26 16:41:38.198611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-07-26 16:41:38.198813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.512 [2024-07-26 16:41:38.198848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.512 qpair failed and we were unable to recover it. 00:36:18.512 [2024-07-26 16:41:38.199023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-07-26 16:41:38.199057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-07-26 16:41:38.199256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-07-26 16:41:38.199291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-07-26 16:41:38.199443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-07-26 16:41:38.199477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-07-26 16:41:38.199649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-07-26 16:41:38.199683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 
00:36:18.513 [2024-07-26 16:41:38.199884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-07-26 16:41:38.199919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-07-26 16:41:38.200123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-07-26 16:41:38.200159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-07-26 16:41:38.200334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-07-26 16:41:38.200368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-07-26 16:41:38.200549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-07-26 16:41:38.200584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-07-26 16:41:38.200761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-07-26 16:41:38.200806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-07-26 16:41:38.200988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-07-26 16:41:38.201023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-07-26 16:41:38.201179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-07-26 16:41:38.201213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-07-26 16:41:38.201390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-07-26 16:41:38.201428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-07-26 16:41:38.201615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-07-26 16:41:38.201650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-07-26 16:41:38.201856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-07-26 16:41:38.201890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 
00:36:18.513 [2024-07-26 16:41:38.202033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-07-26 16:41:38.202076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-07-26 16:41:38.202258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-07-26 16:41:38.202292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-07-26 16:41:38.202475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-07-26 16:41:38.202513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-07-26 16:41:38.202717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-07-26 16:41:38.202759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-07-26 16:41:38.202919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-07-26 16:41:38.202955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-07-26 16:41:38.203139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-07-26 16:41:38.203178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-07-26 16:41:38.203370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-07-26 16:41:38.203417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-07-26 16:41:38.203577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-07-26 16:41:38.203612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-07-26 16:41:38.203797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-07-26 16:41:38.203833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-07-26 16:41:38.203986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-07-26 16:41:38.204020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 
00:36:18.513 [2024-07-26 16:41:38.204243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-07-26 16:41:38.204277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-07-26 16:41:38.204431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-07-26 16:41:38.204466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-07-26 16:41:38.204653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-07-26 16:41:38.204687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-07-26 16:41:38.204860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-07-26 16:41:38.204894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-07-26 16:41:38.205049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-07-26 16:41:38.205090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-07-26 16:41:38.205259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-07-26 16:41:38.205293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-07-26 16:41:38.205486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-07-26 16:41:38.205520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-07-26 16:41:38.205672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-07-26 16:41:38.205707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-07-26 16:41:38.205889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-07-26 16:41:38.205923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-07-26 16:41:38.206103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-07-26 16:41:38.206137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 
00:36:18.513 [2024-07-26 16:41:38.206313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-07-26 16:41:38.206356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-07-26 16:41:38.206557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-07-26 16:41:38.206592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-07-26 16:41:38.206765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-07-26 16:41:38.206799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-07-26 16:41:38.206973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.513 [2024-07-26 16:41:38.207007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.513 qpair failed and we were unable to recover it. 00:36:18.513 [2024-07-26 16:41:38.207224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.514 [2024-07-26 16:41:38.207258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.514 qpair failed and we were unable to recover it. 00:36:18.514 [2024-07-26 16:41:38.207406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.514 [2024-07-26 16:41:38.207441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.514 qpair failed and we were unable to recover it. 00:36:18.514 [2024-07-26 16:41:38.207650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.514 [2024-07-26 16:41:38.207688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.514 qpair failed and we were unable to recover it. 00:36:18.514 [2024-07-26 16:41:38.207864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.514 [2024-07-26 16:41:38.207901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.514 qpair failed and we were unable to recover it. 00:36:18.514 [2024-07-26 16:41:38.208099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.514 [2024-07-26 16:41:38.208141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.514 qpair failed and we were unable to recover it. 00:36:18.514 [2024-07-26 16:41:38.208344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.514 [2024-07-26 16:41:38.208388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.514 qpair failed and we were unable to recover it. 
00:36:18.514 [2024-07-26 16:41:38.208551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.514 [2024-07-26 16:41:38.208586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.514 qpair failed and we were unable to recover it. 00:36:18.514 [2024-07-26 16:41:38.208737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.514 [2024-07-26 16:41:38.208772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.514 qpair failed and we were unable to recover it. 00:36:18.514 [2024-07-26 16:41:38.208935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.514 [2024-07-26 16:41:38.208973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.514 qpair failed and we were unable to recover it. 00:36:18.514 [2024-07-26 16:41:38.209187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.514 [2024-07-26 16:41:38.209221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.514 qpair failed and we were unable to recover it. 00:36:18.514 [2024-07-26 16:41:38.209424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.514 [2024-07-26 16:41:38.209458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.514 qpair failed and we were unable to recover it. 00:36:18.514 [2024-07-26 16:41:38.209659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.514 [2024-07-26 16:41:38.209693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.514 qpair failed and we were unable to recover it. 00:36:18.514 [2024-07-26 16:41:38.209900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.514 [2024-07-26 16:41:38.209934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.514 qpair failed and we were unable to recover it. 00:36:18.514 [2024-07-26 16:41:38.210102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.514 [2024-07-26 16:41:38.210141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.514 qpair failed and we were unable to recover it. 00:36:18.514 [2024-07-26 16:41:38.210289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.514 [2024-07-26 16:41:38.210324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.514 qpair failed and we were unable to recover it. 00:36:18.514 [2024-07-26 16:41:38.210515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.514 [2024-07-26 16:41:38.210550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.514 qpair failed and we were unable to recover it. 
00:36:18.514 [2024-07-26 16:41:38.210723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.514 [2024-07-26 16:41:38.210758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.514 qpair failed and we were unable to recover it. 00:36:18.514 [2024-07-26 16:41:38.210934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.514 [2024-07-26 16:41:38.210969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.514 qpair failed and we were unable to recover it. 00:36:18.514 [2024-07-26 16:41:38.211178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.514 [2024-07-26 16:41:38.211212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.514 qpair failed and we were unable to recover it. 00:36:18.514 [2024-07-26 16:41:38.211465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.514 [2024-07-26 16:41:38.211499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.514 qpair failed and we were unable to recover it. 00:36:18.514 [2024-07-26 16:41:38.211647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.514 [2024-07-26 16:41:38.211681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.514 qpair failed and we were unable to recover it. 00:36:18.514 [2024-07-26 16:41:38.211869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.514 [2024-07-26 16:41:38.211904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.514 qpair failed and we were unable to recover it. 00:36:18.514 [2024-07-26 16:41:38.212083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.514 [2024-07-26 16:41:38.212127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.514 qpair failed and we were unable to recover it. 00:36:18.514 [2024-07-26 16:41:38.212305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.514 [2024-07-26 16:41:38.212339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.514 qpair failed and we were unable to recover it. 00:36:18.514 [2024-07-26 16:41:38.212491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.514 [2024-07-26 16:41:38.212525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.514 qpair failed and we were unable to recover it. 00:36:18.514 [2024-07-26 16:41:38.212728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.514 [2024-07-26 16:41:38.212762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.514 qpair failed and we were unable to recover it. 
00:36:18.514 [2024-07-26 16:41:38.212966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.514 [2024-07-26 16:41:38.212999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.514 qpair failed and we were unable to recover it. 00:36:18.514 [2024-07-26 16:41:38.213172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.514 [2024-07-26 16:41:38.213206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.514 qpair failed and we were unable to recover it. 00:36:18.514 [2024-07-26 16:41:38.213404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.514 [2024-07-26 16:41:38.213438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.514 qpair failed and we were unable to recover it. 00:36:18.514 [2024-07-26 16:41:38.213692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.514 [2024-07-26 16:41:38.213726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.514 qpair failed and we were unable to recover it. 00:36:18.514 [2024-07-26 16:41:38.213874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.514 [2024-07-26 16:41:38.213907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.514 qpair failed and we were unable to recover it. 00:36:18.514 [2024-07-26 16:41:38.214160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.514 [2024-07-26 16:41:38.214195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.514 qpair failed and we were unable to recover it. 00:36:18.514 [2024-07-26 16:41:38.214446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.514 [2024-07-26 16:41:38.214480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.514 qpair failed and we were unable to recover it. 00:36:18.514 [2024-07-26 16:41:38.214663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.514 [2024-07-26 16:41:38.214698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.514 qpair failed and we were unable to recover it. 00:36:18.514 [2024-07-26 16:41:38.214851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.514 [2024-07-26 16:41:38.214885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.514 qpair failed and we were unable to recover it. 00:36:18.514 [2024-07-26 16:41:38.215047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.514 [2024-07-26 16:41:38.215087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.514 qpair failed and we were unable to recover it. 
00:36:18.514 [2024-07-26 16:41:38.215264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.514 [2024-07-26 16:41:38.215298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.514 qpair failed and we were unable to recover it. 00:36:18.514 [2024-07-26 16:41:38.215480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.514 [2024-07-26 16:41:38.215515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.515 qpair failed and we were unable to recover it. 00:36:18.515 [2024-07-26 16:41:38.215694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.515 [2024-07-26 16:41:38.215728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.515 qpair failed and we were unable to recover it. 00:36:18.515 [2024-07-26 16:41:38.215915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.515 [2024-07-26 16:41:38.215949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.515 qpair failed and we were unable to recover it. 00:36:18.515 [2024-07-26 16:41:38.216132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.515 [2024-07-26 16:41:38.216167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.515 qpair failed and we were unable to recover it. 00:36:18.515 [2024-07-26 16:41:38.216319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.515 [2024-07-26 16:41:38.216361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.515 qpair failed and we were unable to recover it. 00:36:18.515 [2024-07-26 16:41:38.216541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.515 [2024-07-26 16:41:38.216575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.515 qpair failed and we were unable to recover it. 00:36:18.515 [2024-07-26 16:41:38.216753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.515 [2024-07-26 16:41:38.216787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.515 qpair failed and we were unable to recover it. 00:36:18.515 [2024-07-26 16:41:38.216944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.515 [2024-07-26 16:41:38.216978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.515 qpair failed and we were unable to recover it. 00:36:18.515 [2024-07-26 16:41:38.217121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.515 [2024-07-26 16:41:38.217156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.515 qpair failed and we were unable to recover it. 
00:36:18.515 [2024-07-26 16:41:38.217322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.515 [2024-07-26 16:41:38.217370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.515 qpair failed and we were unable to recover it. 00:36:18.515 [2024-07-26 16:41:38.217551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.515 [2024-07-26 16:41:38.217585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.515 qpair failed and we were unable to recover it. 00:36:18.515 [2024-07-26 16:41:38.217752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.515 [2024-07-26 16:41:38.217787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.515 qpair failed and we were unable to recover it. 00:36:18.515 [2024-07-26 16:41:38.217962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.515 [2024-07-26 16:41:38.217996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.515 qpair failed and we were unable to recover it. 00:36:18.515 [2024-07-26 16:41:38.218211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.515 [2024-07-26 16:41:38.218245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.515 qpair failed and we were unable to recover it. 00:36:18.515 [2024-07-26 16:41:38.218422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.515 [2024-07-26 16:41:38.218456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.515 qpair failed and we were unable to recover it. 00:36:18.515 [2024-07-26 16:41:38.218606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.515 [2024-07-26 16:41:38.218640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.515 qpair failed and we were unable to recover it. 00:36:18.515 [2024-07-26 16:41:38.218793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.515 [2024-07-26 16:41:38.218844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.515 qpair failed and we were unable to recover it. 00:36:18.515 [2024-07-26 16:41:38.219010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.515 [2024-07-26 16:41:38.219044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.515 qpair failed and we were unable to recover it. 00:36:18.515 [2024-07-26 16:41:38.219266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.515 [2024-07-26 16:41:38.219300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.515 qpair failed and we were unable to recover it. 
00:36:18.515 [2024-07-26 16:41:38.219456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.515 [2024-07-26 16:41:38.219491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.515 qpair failed and we were unable to recover it. 00:36:18.515 [2024-07-26 16:41:38.219669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.515 [2024-07-26 16:41:38.219703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.515 qpair failed and we were unable to recover it. 00:36:18.515 [2024-07-26 16:41:38.219853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.515 [2024-07-26 16:41:38.219907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.515 qpair failed and we were unable to recover it. 00:36:18.515 [2024-07-26 16:41:38.220086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.515 [2024-07-26 16:41:38.220126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.515 qpair failed and we were unable to recover it. 00:36:18.515 [2024-07-26 16:41:38.220275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.515 [2024-07-26 16:41:38.220309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.515 qpair failed and we were unable to recover it. 00:36:18.515 [2024-07-26 16:41:38.220515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.515 [2024-07-26 16:41:38.220549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.515 qpair failed and we were unable to recover it. 00:36:18.515 [2024-07-26 16:41:38.220694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.515 [2024-07-26 16:41:38.220727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.515 qpair failed and we were unable to recover it. 00:36:18.515 [2024-07-26 16:41:38.220902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.515 [2024-07-26 16:41:38.220936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.515 qpair failed and we were unable to recover it. 00:36:18.515 [2024-07-26 16:41:38.221122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.515 [2024-07-26 16:41:38.221156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.515 qpair failed and we were unable to recover it. 00:36:18.515 [2024-07-26 16:41:38.221331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.515 [2024-07-26 16:41:38.221364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.515 qpair failed and we were unable to recover it. 
00:36:18.515 [2024-07-26 16:41:38.221515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.515 [2024-07-26 16:41:38.221548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.515 qpair failed and we were unable to recover it. 00:36:18.515 [2024-07-26 16:41:38.221707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.515 [2024-07-26 16:41:38.221740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.515 qpair failed and we were unable to recover it. 00:36:18.515 [2024-07-26 16:41:38.221913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.515 [2024-07-26 16:41:38.221947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.515 qpair failed and we were unable to recover it. 00:36:18.515 [2024-07-26 16:41:38.222133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.515 [2024-07-26 16:41:38.222167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.515 qpair failed and we were unable to recover it. 00:36:18.515 [2024-07-26 16:41:38.222352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.515 [2024-07-26 16:41:38.222386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.515 qpair failed and we were unable to recover it. 00:36:18.515 [2024-07-26 16:41:38.222560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.515 [2024-07-26 16:41:38.222594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.515 qpair failed and we were unable to recover it. 00:36:18.515 [2024-07-26 16:41:38.222739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.515 [2024-07-26 16:41:38.222774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.515 qpair failed and we were unable to recover it. 00:36:18.515 [2024-07-26 16:41:38.222955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.515 [2024-07-26 16:41:38.222989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.515 qpair failed and we were unable to recover it. 00:36:18.515 [2024-07-26 16:41:38.223170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.515 [2024-07-26 16:41:38.223203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.515 qpair failed and we were unable to recover it. 00:36:18.516 [2024-07-26 16:41:38.223355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-07-26 16:41:38.223388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 
00:36:18.516 [2024-07-26 16:41:38.223587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-07-26 16:41:38.223621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 00:36:18.516 [2024-07-26 16:41:38.223788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-07-26 16:41:38.223822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 00:36:18.516 [2024-07-26 16:41:38.223971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-07-26 16:41:38.224005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 00:36:18.516 [2024-07-26 16:41:38.224189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-07-26 16:41:38.224223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 00:36:18.516 [2024-07-26 16:41:38.224404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-07-26 16:41:38.224438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 00:36:18.516 [2024-07-26 16:41:38.224612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-07-26 16:41:38.224646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 00:36:18.516 [2024-07-26 16:41:38.224794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-07-26 16:41:38.224834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 00:36:18.516 [2024-07-26 16:41:38.225029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-07-26 16:41:38.225107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 00:36:18.516 [2024-07-26 16:41:38.225358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-07-26 16:41:38.225402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 00:36:18.516 [2024-07-26 16:41:38.225647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-07-26 16:41:38.225689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 
00:36:18.516 [2024-07-26 16:41:38.225842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-07-26 16:41:38.225885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 00:36:18.516 [2024-07-26 16:41:38.226075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-07-26 16:41:38.226120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 00:36:18.516 [2024-07-26 16:41:38.226328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-07-26 16:41:38.226375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 00:36:18.516 [2024-07-26 16:41:38.226554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-07-26 16:41:38.226589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 00:36:18.516 [2024-07-26 16:41:38.226749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-07-26 16:41:38.226785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 00:36:18.516 [2024-07-26 16:41:38.227003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-07-26 16:41:38.227039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 00:36:18.516 [2024-07-26 16:41:38.227213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-07-26 16:41:38.227250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 00:36:18.516 [2024-07-26 16:41:38.227447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-07-26 16:41:38.227516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 00:36:18.516 [2024-07-26 16:41:38.227713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-07-26 16:41:38.227747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 00:36:18.516 [2024-07-26 16:41:38.227898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-07-26 16:41:38.227932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 
00:36:18.516 [2024-07-26 16:41:38.228087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-07-26 16:41:38.228126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 00:36:18.516 [2024-07-26 16:41:38.228276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-07-26 16:41:38.228310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 00:36:18.516 [2024-07-26 16:41:38.228500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-07-26 16:41:38.228532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 00:36:18.516 [2024-07-26 16:41:38.228679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-07-26 16:41:38.228711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 00:36:18.516 [2024-07-26 16:41:38.228884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-07-26 16:41:38.228915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 00:36:18.516 [2024-07-26 16:41:38.229101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-07-26 16:41:38.229134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 00:36:18.516 [2024-07-26 16:41:38.229284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-07-26 16:41:38.229316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 00:36:18.516 [2024-07-26 16:41:38.229503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-07-26 16:41:38.229535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 00:36:18.516 [2024-07-26 16:41:38.229693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.516 [2024-07-26 16:41:38.229724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.516 qpair failed and we were unable to recover it. 00:36:18.784 [2024-07-26 16:41:38.229899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.784 [2024-07-26 16:41:38.229932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.784 qpair failed and we were unable to recover it. 
00:36:18.784 [2024-07-26 16:41:38.230098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.784 [2024-07-26 16:41:38.230131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.784 qpair failed and we were unable to recover it. 00:36:18.784 [2024-07-26 16:41:38.230288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.784 [2024-07-26 16:41:38.230320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.784 qpair failed and we were unable to recover it. 00:36:18.784 [2024-07-26 16:41:38.230485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.784 [2024-07-26 16:41:38.230517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.784 qpair failed and we were unable to recover it. 00:36:18.784 [2024-07-26 16:41:38.230690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.784 [2024-07-26 16:41:38.230722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.784 qpair failed and we were unable to recover it. 00:36:18.784 [2024-07-26 16:41:38.230871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.784 [2024-07-26 16:41:38.230903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.784 qpair failed and we were unable to recover it. 00:36:18.784 [2024-07-26 16:41:38.231076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.784 [2024-07-26 16:41:38.231116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.784 qpair failed and we were unable to recover it. 00:36:18.784 [2024-07-26 16:41:38.231269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.784 [2024-07-26 16:41:38.231303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.784 qpair failed and we were unable to recover it. 00:36:18.784 [2024-07-26 16:41:38.231459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.784 [2024-07-26 16:41:38.231492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.784 qpair failed and we were unable to recover it. 00:36:18.784 [2024-07-26 16:41:38.231648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.784 [2024-07-26 16:41:38.231682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.784 qpair failed and we were unable to recover it. 00:36:18.784 [2024-07-26 16:41:38.231837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.784 [2024-07-26 16:41:38.231871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.784 qpair failed and we were unable to recover it. 
00:36:18.784 [2024-07-26 16:41:38.232048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.784 [2024-07-26 16:41:38.232105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.784 qpair failed and we were unable to recover it. 00:36:18.784 [2024-07-26 16:41:38.232262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.784 [2024-07-26 16:41:38.232296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.784 qpair failed and we were unable to recover it. 00:36:18.784 [2024-07-26 16:41:38.232475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.784 [2024-07-26 16:41:38.232508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.784 qpair failed and we were unable to recover it. 00:36:18.784 [2024-07-26 16:41:38.232663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.784 [2024-07-26 16:41:38.232707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.784 qpair failed and we were unable to recover it. 00:36:18.784 [2024-07-26 16:41:38.232854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.784 [2024-07-26 16:41:38.232888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.784 qpair failed and we were unable to recover it. 00:36:18.784 [2024-07-26 16:41:38.233065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.784 [2024-07-26 16:41:38.233110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.784 qpair failed and we were unable to recover it. 00:36:18.784 [2024-07-26 16:41:38.233257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.785 [2024-07-26 16:41:38.233291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.785 qpair failed and we were unable to recover it. 00:36:18.785 [2024-07-26 16:41:38.233438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.785 [2024-07-26 16:41:38.233472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.785 qpair failed and we were unable to recover it. 00:36:18.785 [2024-07-26 16:41:38.233631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.785 [2024-07-26 16:41:38.233664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.785 qpair failed and we were unable to recover it. 00:36:18.785 [2024-07-26 16:41:38.233808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.785 [2024-07-26 16:41:38.233842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.785 qpair failed and we were unable to recover it. 
00:36:18.785 [2024-07-26 16:41:38.234013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.785 [2024-07-26 16:41:38.234047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.785 qpair failed and we were unable to recover it. 00:36:18.785 [2024-07-26 16:41:38.234219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.785 [2024-07-26 16:41:38.234252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.785 qpair failed and we were unable to recover it. 00:36:18.785 [2024-07-26 16:41:38.234402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.785 [2024-07-26 16:41:38.234436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.785 qpair failed and we were unable to recover it. 00:36:18.785 [2024-07-26 16:41:38.234594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.785 [2024-07-26 16:41:38.234627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.785 qpair failed and we were unable to recover it. 00:36:18.785 [2024-07-26 16:41:38.234777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.785 [2024-07-26 16:41:38.234811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.785 qpair failed and we were unable to recover it. 00:36:18.785 [2024-07-26 16:41:38.234957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.785 [2024-07-26 16:41:38.234991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.785 qpair failed and we were unable to recover it. 00:36:18.785 [2024-07-26 16:41:38.235146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.785 [2024-07-26 16:41:38.235180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.785 qpair failed and we were unable to recover it. 00:36:18.785 [2024-07-26 16:41:38.235337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.785 [2024-07-26 16:41:38.235375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.785 qpair failed and we were unable to recover it. 00:36:18.785 [2024-07-26 16:41:38.235547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.785 [2024-07-26 16:41:38.235581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.785 qpair failed and we were unable to recover it. 00:36:18.785 [2024-07-26 16:41:38.235832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.785 [2024-07-26 16:41:38.235865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.785 qpair failed and we were unable to recover it. 
00:36:18.785 [2024-07-26 16:41:38.236017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.785 [2024-07-26 16:41:38.236051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.785 qpair failed and we were unable to recover it. 00:36:18.785 [2024-07-26 16:41:38.236315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.785 [2024-07-26 16:41:38.236348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.785 qpair failed and we were unable to recover it. 00:36:18.785 [2024-07-26 16:41:38.236549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.785 [2024-07-26 16:41:38.236582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.785 qpair failed and we were unable to recover it. 00:36:18.785 [2024-07-26 16:41:38.236733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.785 [2024-07-26 16:41:38.236766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.785 qpair failed and we were unable to recover it. 00:36:18.785 [2024-07-26 16:41:38.236918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.785 [2024-07-26 16:41:38.236951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.785 qpair failed and we were unable to recover it. 00:36:18.785 [2024-07-26 16:41:38.237152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.785 [2024-07-26 16:41:38.237187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.785 qpair failed and we were unable to recover it. 00:36:18.785 [2024-07-26 16:41:38.237339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.785 [2024-07-26 16:41:38.237372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.785 qpair failed and we were unable to recover it. 00:36:18.785 [2024-07-26 16:41:38.237549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.785 [2024-07-26 16:41:38.237583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.785 qpair failed and we were unable to recover it. 00:36:18.785 [2024-07-26 16:41:38.237733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.785 [2024-07-26 16:41:38.237767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.785 qpair failed and we were unable to recover it. 00:36:18.785 [2024-07-26 16:41:38.237916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.785 [2024-07-26 16:41:38.237951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.785 qpair failed and we were unable to recover it. 
00:36:18.785 [2024-07-26 16:41:38.238140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.785 [2024-07-26 16:41:38.238174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.785 qpair failed and we were unable to recover it. 00:36:18.785 [2024-07-26 16:41:38.238354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.785 [2024-07-26 16:41:38.238387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.785 qpair failed and we were unable to recover it. 00:36:18.785 [2024-07-26 16:41:38.238569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.785 [2024-07-26 16:41:38.238602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.785 qpair failed and we were unable to recover it. 00:36:18.785 [2024-07-26 16:41:38.238784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.785 [2024-07-26 16:41:38.238817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.785 qpair failed and we were unable to recover it. 00:36:18.785 [2024-07-26 16:41:38.238961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.785 [2024-07-26 16:41:38.238994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.785 qpair failed and we were unable to recover it. 00:36:18.785 [2024-07-26 16:41:38.239148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.785 [2024-07-26 16:41:38.239183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.785 qpair failed and we were unable to recover it. 00:36:18.785 [2024-07-26 16:41:38.239334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.785 [2024-07-26 16:41:38.239368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.785 qpair failed and we were unable to recover it. 00:36:18.785 [2024-07-26 16:41:38.239568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.785 [2024-07-26 16:41:38.239601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.785 qpair failed and we were unable to recover it. 00:36:18.785 [2024-07-26 16:41:38.239777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.785 [2024-07-26 16:41:38.239811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.785 qpair failed and we were unable to recover it. 00:36:18.785 [2024-07-26 16:41:38.240071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.785 [2024-07-26 16:41:38.240105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.785 qpair failed and we were unable to recover it. 
00:36:18.785 [2024-07-26 16:41:38.240358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.785 [2024-07-26 16:41:38.240392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.785 qpair failed and we were unable to recover it. 00:36:18.785 [2024-07-26 16:41:38.240639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.785 [2024-07-26 16:41:38.240671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.785 qpair failed and we were unable to recover it. 00:36:18.785 [2024-07-26 16:41:38.240848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.785 [2024-07-26 16:41:38.240883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.785 qpair failed and we were unable to recover it. 00:36:18.785 [2024-07-26 16:41:38.241088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.785 [2024-07-26 16:41:38.241123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.786 qpair failed and we were unable to recover it. 00:36:18.786 [2024-07-26 16:41:38.241332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.786 [2024-07-26 16:41:38.241366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.786 qpair failed and we were unable to recover it. 00:36:18.786 [2024-07-26 16:41:38.241565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.786 [2024-07-26 16:41:38.241599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.786 qpair failed and we were unable to recover it. 00:36:18.786 [2024-07-26 16:41:38.241777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.786 [2024-07-26 16:41:38.241810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.786 qpair failed and we were unable to recover it. 00:36:18.786 [2024-07-26 16:41:38.241986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.786 [2024-07-26 16:41:38.242020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.786 qpair failed and we were unable to recover it. 00:36:18.786 [2024-07-26 16:41:38.242199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.786 [2024-07-26 16:41:38.242233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.786 qpair failed and we were unable to recover it. 00:36:18.786 [2024-07-26 16:41:38.242425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.786 [2024-07-26 16:41:38.242458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.786 qpair failed and we were unable to recover it. 
00:36:18.786 [2024-07-26 16:41:38.242668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.786 [2024-07-26 16:41:38.242702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.786 qpair failed and we were unable to recover it. 00:36:18.786 [2024-07-26 16:41:38.242882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.786 [2024-07-26 16:41:38.242915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.786 qpair failed and we were unable to recover it. 00:36:18.786 [2024-07-26 16:41:38.243092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.786 [2024-07-26 16:41:38.243133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.786 qpair failed and we were unable to recover it. 00:36:18.786 [2024-07-26 16:41:38.243309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.786 [2024-07-26 16:41:38.243343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.786 qpair failed and we were unable to recover it. 00:36:18.786 [2024-07-26 16:41:38.243517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.786 [2024-07-26 16:41:38.243550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.786 qpair failed and we were unable to recover it. 00:36:18.786 [2024-07-26 16:41:38.243729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.786 [2024-07-26 16:41:38.243762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.786 qpair failed and we were unable to recover it. 00:36:18.786 [2024-07-26 16:41:38.243936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.786 [2024-07-26 16:41:38.243970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.786 qpair failed and we were unable to recover it. 00:36:18.786 [2024-07-26 16:41:38.244139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.786 [2024-07-26 16:41:38.244180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.786 qpair failed and we were unable to recover it. 00:36:18.786 [2024-07-26 16:41:38.244434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.786 [2024-07-26 16:41:38.244467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.786 qpair failed and we were unable to recover it. 00:36:18.786 [2024-07-26 16:41:38.244726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.786 [2024-07-26 16:41:38.244759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.786 qpair failed and we were unable to recover it. 
00:36:18.786 [2024-07-26 16:41:38.244975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.786 [2024-07-26 16:41:38.245010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.786 qpair failed and we were unable to recover it. 00:36:18.786 [2024-07-26 16:41:38.245186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.786 [2024-07-26 16:41:38.245221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.786 qpair failed and we were unable to recover it. 00:36:18.786 [2024-07-26 16:41:38.245375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.786 [2024-07-26 16:41:38.245423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.786 qpair failed and we were unable to recover it. 00:36:18.786 [2024-07-26 16:41:38.245661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.786 [2024-07-26 16:41:38.245694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.786 qpair failed and we were unable to recover it. 00:36:18.786 [2024-07-26 16:41:38.245872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.786 [2024-07-26 16:41:38.245906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.786 qpair failed and we were unable to recover it. 00:36:18.786 [2024-07-26 16:41:38.246070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.786 [2024-07-26 16:41:38.246105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.786 qpair failed and we were unable to recover it. 00:36:18.786 [2024-07-26 16:41:38.246313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.786 [2024-07-26 16:41:38.246347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.786 qpair failed and we were unable to recover it. 00:36:18.786 [2024-07-26 16:41:38.246544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.786 [2024-07-26 16:41:38.246587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.786 qpair failed and we were unable to recover it. 00:36:18.786 [2024-07-26 16:41:38.246848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.786 [2024-07-26 16:41:38.246882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.786 qpair failed and we were unable to recover it. 00:36:18.786 [2024-07-26 16:41:38.247053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.786 [2024-07-26 16:41:38.247094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.786 qpair failed and we were unable to recover it. 
00:36:18.786 [2024-07-26 16:41:38.247249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.786 [2024-07-26 16:41:38.247283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.786 qpair failed and we were unable to recover it. 00:36:18.786 [2024-07-26 16:41:38.247443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.786 [2024-07-26 16:41:38.247477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.786 qpair failed and we were unable to recover it. 00:36:18.786 [2024-07-26 16:41:38.247687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.786 [2024-07-26 16:41:38.247721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.786 qpair failed and we were unable to recover it. 00:36:18.786 [2024-07-26 16:41:38.247894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.786 [2024-07-26 16:41:38.247928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.786 qpair failed and we were unable to recover it. 00:36:18.786 [2024-07-26 16:41:38.248109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.786 [2024-07-26 16:41:38.248144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.786 qpair failed and we were unable to recover it. 00:36:18.786 [2024-07-26 16:41:38.248313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.786 [2024-07-26 16:41:38.248346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.786 qpair failed and we were unable to recover it. 00:36:18.786 [2024-07-26 16:41:38.248527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.786 [2024-07-26 16:41:38.248560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.786 qpair failed and we were unable to recover it. 00:36:18.786 [2024-07-26 16:41:38.248697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.786 [2024-07-26 16:41:38.248731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.786 qpair failed and we were unable to recover it. 00:36:18.786 [2024-07-26 16:41:38.248888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.786 [2024-07-26 16:41:38.248921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.786 qpair failed and we were unable to recover it. 00:36:18.786 [2024-07-26 16:41:38.249078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.786 [2024-07-26 16:41:38.249122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.786 qpair failed and we were unable to recover it. 
00:36:18.786 [2024-07-26 16:41:38.249301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.787 [2024-07-26 16:41:38.249334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.787 qpair failed and we were unable to recover it. 00:36:18.787 [2024-07-26 16:41:38.249510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.787 [2024-07-26 16:41:38.249543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.787 qpair failed and we were unable to recover it. 00:36:18.787 [2024-07-26 16:41:38.249743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.787 [2024-07-26 16:41:38.249776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.787 qpair failed and we were unable to recover it. 00:36:18.787 [2024-07-26 16:41:38.249951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.787 [2024-07-26 16:41:38.249985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.787 qpair failed and we were unable to recover it. 00:36:18.787 [2024-07-26 16:41:38.250142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.787 [2024-07-26 16:41:38.250177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.787 qpair failed and we were unable to recover it. 00:36:18.787 [2024-07-26 16:41:38.250436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.787 [2024-07-26 16:41:38.250470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.787 qpair failed and we were unable to recover it. 00:36:18.787 [2024-07-26 16:41:38.250682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.787 [2024-07-26 16:41:38.250715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.787 qpair failed and we were unable to recover it. 00:36:18.787 [2024-07-26 16:41:38.250868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.787 [2024-07-26 16:41:38.250902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.787 qpair failed and we were unable to recover it. 00:36:18.787 [2024-07-26 16:41:38.251157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.787 [2024-07-26 16:41:38.251191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.787 qpair failed and we were unable to recover it. 00:36:18.787 [2024-07-26 16:41:38.251361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.787 [2024-07-26 16:41:38.251394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.787 qpair failed and we were unable to recover it. 
00:36:18.787 [2024-07-26 16:41:38.251591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.787 [2024-07-26 16:41:38.251624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.787 qpair failed and we were unable to recover it. 00:36:18.787 [2024-07-26 16:41:38.251781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.787 [2024-07-26 16:41:38.251815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.787 qpair failed and we were unable to recover it. 00:36:18.787 [2024-07-26 16:41:38.252010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.787 [2024-07-26 16:41:38.252044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.787 qpair failed and we were unable to recover it. 00:36:18.787 [2024-07-26 16:41:38.252259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.787 [2024-07-26 16:41:38.252293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.787 qpair failed and we were unable to recover it. 00:36:18.787 [2024-07-26 16:41:38.252461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.787 [2024-07-26 16:41:38.252495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.787 qpair failed and we were unable to recover it. 00:36:18.787 [2024-07-26 16:41:38.252699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.787 [2024-07-26 16:41:38.252733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.787 qpair failed and we were unable to recover it. 00:36:18.787 [2024-07-26 16:41:38.252913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.787 [2024-07-26 16:41:38.252946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.787 qpair failed and we were unable to recover it. 00:36:18.787 [2024-07-26 16:41:38.253150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.787 [2024-07-26 16:41:38.253189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.787 qpair failed and we were unable to recover it. 00:36:18.787 [2024-07-26 16:41:38.253363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.787 [2024-07-26 16:41:38.253397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.787 qpair failed and we were unable to recover it. 00:36:18.787 [2024-07-26 16:41:38.253568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.787 [2024-07-26 16:41:38.253601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.787 qpair failed and we were unable to recover it. 
00:36:18.787 [2024-07-26 16:41:38.253753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.787 [2024-07-26 16:41:38.253787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.787 qpair failed and we were unable to recover it. 00:36:18.787 [2024-07-26 16:41:38.253953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.787 [2024-07-26 16:41:38.253987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.787 qpair failed and we were unable to recover it. 00:36:18.787 [2024-07-26 16:41:38.254187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.787 [2024-07-26 16:41:38.254221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.787 qpair failed and we were unable to recover it. 00:36:18.787 [2024-07-26 16:41:38.254398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.787 [2024-07-26 16:41:38.254432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.787 qpair failed and we were unable to recover it. 00:36:18.787 [2024-07-26 16:41:38.254632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.787 [2024-07-26 16:41:38.254666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.787 qpair failed and we were unable to recover it. 00:36:18.787 [2024-07-26 16:41:38.254873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.787 [2024-07-26 16:41:38.254907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.787 qpair failed and we were unable to recover it. 00:36:18.787 [2024-07-26 16:41:38.255111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.787 [2024-07-26 16:41:38.255145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.787 qpair failed and we were unable to recover it. 00:36:18.787 [2024-07-26 16:41:38.255322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.787 [2024-07-26 16:41:38.255355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.787 qpair failed and we were unable to recover it. 00:36:18.787 [2024-07-26 16:41:38.255524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.787 [2024-07-26 16:41:38.255558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.787 qpair failed and we were unable to recover it. 00:36:18.787 [2024-07-26 16:41:38.255730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.787 [2024-07-26 16:41:38.255763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.787 qpair failed and we were unable to recover it. 
00:36:18.787 [2024-07-26 16:41:38.255952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.787 [2024-07-26 16:41:38.255986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.787 qpair failed and we were unable to recover it. 00:36:18.787 [2024-07-26 16:41:38.256193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.787 [2024-07-26 16:41:38.256227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.787 qpair failed and we were unable to recover it. 00:36:18.787 [2024-07-26 16:41:38.256403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.787 [2024-07-26 16:41:38.256437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.787 qpair failed and we were unable to recover it. 00:36:18.787 [2024-07-26 16:41:38.256689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.787 [2024-07-26 16:41:38.256723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.787 qpair failed and we were unable to recover it. 00:36:18.787 [2024-07-26 16:41:38.256922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.787 [2024-07-26 16:41:38.256955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.787 qpair failed and we were unable to recover it. 00:36:18.787 [2024-07-26 16:41:38.257102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.787 [2024-07-26 16:41:38.257136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.787 qpair failed and we were unable to recover it. 00:36:18.787 [2024-07-26 16:41:38.257306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.787 [2024-07-26 16:41:38.257339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.787 qpair failed and we were unable to recover it. 00:36:18.787 [2024-07-26 16:41:38.257515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.787 [2024-07-26 16:41:38.257549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.787 qpair failed and we were unable to recover it. 00:36:18.788 [2024-07-26 16:41:38.257727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.788 [2024-07-26 16:41:38.257760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.788 qpair failed and we were unable to recover it. 00:36:18.788 [2024-07-26 16:41:38.257944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.788 [2024-07-26 16:41:38.257977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.788 qpair failed and we were unable to recover it. 
00:36:18.788 [2024-07-26 16:41:38.258175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.788 [2024-07-26 16:41:38.258209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.788 qpair failed and we were unable to recover it. 00:36:18.788 [2024-07-26 16:41:38.258368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.788 [2024-07-26 16:41:38.258402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.788 qpair failed and we were unable to recover it. 00:36:18.788 [2024-07-26 16:41:38.258551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.788 [2024-07-26 16:41:38.258585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.788 qpair failed and we were unable to recover it. 00:36:18.788 [2024-07-26 16:41:38.258774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.788 [2024-07-26 16:41:38.258808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.788 qpair failed and we were unable to recover it. 00:36:18.788 [2024-07-26 16:41:38.258984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.788 [2024-07-26 16:41:38.259018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.788 qpair failed and we were unable to recover it. 00:36:18.788 [2024-07-26 16:41:38.259180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.788 [2024-07-26 16:41:38.259214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.788 qpair failed and we were unable to recover it. 00:36:18.788 [2024-07-26 16:41:38.259418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.788 [2024-07-26 16:41:38.259452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.788 qpair failed and we were unable to recover it. 00:36:18.788 [2024-07-26 16:41:38.259623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.788 [2024-07-26 16:41:38.259656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.788 qpair failed and we were unable to recover it. 00:36:18.788 [2024-07-26 16:41:38.259865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.788 [2024-07-26 16:41:38.259899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.788 qpair failed and we were unable to recover it. 00:36:18.788 [2024-07-26 16:41:38.260074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.788 [2024-07-26 16:41:38.260108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.788 qpair failed and we were unable to recover it. 
00:36:18.788 [2024-07-26 16:41:38.260258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.788 [2024-07-26 16:41:38.260291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.788 qpair failed and we were unable to recover it. 00:36:18.788 [2024-07-26 16:41:38.260468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.788 [2024-07-26 16:41:38.260511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.788 qpair failed and we were unable to recover it. 00:36:18.788 [2024-07-26 16:41:38.260690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.788 [2024-07-26 16:41:38.260723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.788 qpair failed and we were unable to recover it. 00:36:18.788 [2024-07-26 16:41:38.260921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.788 [2024-07-26 16:41:38.260954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.788 qpair failed and we were unable to recover it. 00:36:18.788 [2024-07-26 16:41:38.261140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.788 [2024-07-26 16:41:38.261174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.788 qpair failed and we were unable to recover it. 00:36:18.788 [2024-07-26 16:41:38.261385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.788 [2024-07-26 16:41:38.261419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.788 qpair failed and we were unable to recover it. 00:36:18.788 [2024-07-26 16:41:38.261576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.788 [2024-07-26 16:41:38.261609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.788 qpair failed and we were unable to recover it. 00:36:18.788 [2024-07-26 16:41:38.261776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.788 [2024-07-26 16:41:38.261814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.788 qpair failed and we were unable to recover it. 00:36:18.788 [2024-07-26 16:41:38.262017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.788 [2024-07-26 16:41:38.262051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.788 qpair failed and we were unable to recover it. 00:36:18.788 [2024-07-26 16:41:38.262282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.788 [2024-07-26 16:41:38.262315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.788 qpair failed and we were unable to recover it. 
00:36:18.788 [2024-07-26 16:41:38.262502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.788 [2024-07-26 16:41:38.262536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.788 qpair failed and we were unable to recover it. 00:36:18.788 [2024-07-26 16:41:38.262687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.788 [2024-07-26 16:41:38.262720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.788 qpair failed and we were unable to recover it. 00:36:18.788 [2024-07-26 16:41:38.262876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.788 [2024-07-26 16:41:38.262910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.788 qpair failed and we were unable to recover it. 00:36:18.788 [2024-07-26 16:41:38.263073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.788 [2024-07-26 16:41:38.263117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.788 qpair failed and we were unable to recover it. 00:36:18.788 [2024-07-26 16:41:38.263272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.788 [2024-07-26 16:41:38.263304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.788 qpair failed and we were unable to recover it. 00:36:18.788 [2024-07-26 16:41:38.263465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.788 [2024-07-26 16:41:38.263499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.788 qpair failed and we were unable to recover it. 00:36:18.788 [2024-07-26 16:41:38.263708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.788 [2024-07-26 16:41:38.263742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.788 qpair failed and we were unable to recover it. 00:36:18.788 [2024-07-26 16:41:38.263911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.788 [2024-07-26 16:41:38.263945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.788 qpair failed and we were unable to recover it. 00:36:18.788 [2024-07-26 16:41:38.264137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.788 [2024-07-26 16:41:38.264170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.788 qpair failed and we were unable to recover it. 00:36:18.788 [2024-07-26 16:41:38.264379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.788 [2024-07-26 16:41:38.264421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.788 qpair failed and we were unable to recover it. 
00:36:18.788 [2024-07-26 16:41:38.264596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.788 [2024-07-26 16:41:38.264630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.788 qpair failed and we were unable to recover it. 00:36:18.788 [2024-07-26 16:41:38.264810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.788 [2024-07-26 16:41:38.264844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.788 qpair failed and we were unable to recover it. 00:36:18.788 [2024-07-26 16:41:38.265044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.788 [2024-07-26 16:41:38.265085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.788 qpair failed and we were unable to recover it. 00:36:18.788 [2024-07-26 16:41:38.265247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.788 [2024-07-26 16:41:38.265279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.788 qpair failed and we were unable to recover it. 00:36:18.788 [2024-07-26 16:41:38.265462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.788 [2024-07-26 16:41:38.265497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.788 qpair failed and we were unable to recover it. 00:36:18.788 [2024-07-26 16:41:38.265640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.789 [2024-07-26 16:41:38.265674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.789 qpair failed and we were unable to recover it. 00:36:18.789 [2024-07-26 16:41:38.265876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.789 [2024-07-26 16:41:38.265910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.789 qpair failed and we were unable to recover it. 00:36:18.789 [2024-07-26 16:41:38.266120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.789 [2024-07-26 16:41:38.266154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.789 qpair failed and we were unable to recover it. 00:36:18.789 [2024-07-26 16:41:38.266294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.789 [2024-07-26 16:41:38.266326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.789 qpair failed and we were unable to recover it. 00:36:18.789 [2024-07-26 16:41:38.266506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.789 [2024-07-26 16:41:38.266539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.789 qpair failed and we were unable to recover it. 
00:36:18.789 [2024-07-26 16:41:38.266718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.789 [2024-07-26 16:41:38.266751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.789 qpair failed and we were unable to recover it. 00:36:18.789 [2024-07-26 16:41:38.266934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.789 [2024-07-26 16:41:38.266968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.789 qpair failed and we were unable to recover it. 00:36:18.789 [2024-07-26 16:41:38.267179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.789 [2024-07-26 16:41:38.267213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.789 qpair failed and we were unable to recover it. 00:36:18.789 [2024-07-26 16:41:38.267425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.789 [2024-07-26 16:41:38.267459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.789 qpair failed and we were unable to recover it. 00:36:18.789 [2024-07-26 16:41:38.267646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.789 [2024-07-26 16:41:38.267680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.789 qpair failed and we were unable to recover it. 00:36:18.789 [2024-07-26 16:41:38.267854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.789 [2024-07-26 16:41:38.267888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.789 qpair failed and we were unable to recover it. 00:36:18.789 [2024-07-26 16:41:38.268118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.789 [2024-07-26 16:41:38.268169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.789 qpair failed and we were unable to recover it. 00:36:18.789 [2024-07-26 16:41:38.268365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.789 [2024-07-26 16:41:38.268403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.789 qpair failed and we were unable to recover it. 00:36:18.789 [2024-07-26 16:41:38.268582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.789 [2024-07-26 16:41:38.268618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.789 qpair failed and we were unable to recover it. 00:36:18.789 [2024-07-26 16:41:38.268772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.789 [2024-07-26 16:41:38.268807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.789 qpair failed and we were unable to recover it. 
00:36:18.789 [2024-07-26 16:41:38.268971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.789 [2024-07-26 16:41:38.269006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.789 qpair failed and we were unable to recover it. 00:36:18.789 [2024-07-26 16:41:38.269193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.789 [2024-07-26 16:41:38.269227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.789 qpair failed and we were unable to recover it. 00:36:18.789 [2024-07-26 16:41:38.269376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.789 [2024-07-26 16:41:38.269411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.789 qpair failed and we were unable to recover it. 00:36:18.789 [2024-07-26 16:41:38.269584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.789 [2024-07-26 16:41:38.269619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.789 qpair failed and we were unable to recover it. 00:36:18.789 [2024-07-26 16:41:38.269805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.789 [2024-07-26 16:41:38.269839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.789 qpair failed and we were unable to recover it. 00:36:18.789 [2024-07-26 16:41:38.270006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.789 [2024-07-26 16:41:38.270056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.789 qpair failed and we were unable to recover it. 00:36:18.789 [2024-07-26 16:41:38.270232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.789 [2024-07-26 16:41:38.270267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.789 qpair failed and we were unable to recover it. 00:36:18.789 [2024-07-26 16:41:38.270447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.789 [2024-07-26 16:41:38.270482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.789 qpair failed and we were unable to recover it. 00:36:18.789 [2024-07-26 16:41:38.270677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.789 [2024-07-26 16:41:38.270712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.789 qpair failed and we were unable to recover it. 00:36:18.789 [2024-07-26 16:41:38.270903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.789 [2024-07-26 16:41:38.270938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.789 qpair failed and we were unable to recover it. 
00:36:18.789 [2024-07-26 16:41:38.271227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.789 [2024-07-26 16:41:38.271262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.789 qpair failed and we were unable to recover it. 00:36:18.789 [2024-07-26 16:41:38.271423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.789 [2024-07-26 16:41:38.271457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.789 qpair failed and we were unable to recover it. 00:36:18.789 [2024-07-26 16:41:38.271639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.789 [2024-07-26 16:41:38.271673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.789 qpair failed and we were unable to recover it. 00:36:18.789 [2024-07-26 16:41:38.271853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.789 [2024-07-26 16:41:38.271888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.789 qpair failed and we were unable to recover it. 00:36:18.789 [2024-07-26 16:41:38.272087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.789 [2024-07-26 16:41:38.272128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.789 qpair failed and we were unable to recover it. 00:36:18.789 [2024-07-26 16:41:38.272302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.789 [2024-07-26 16:41:38.272335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.789 qpair failed and we were unable to recover it. 00:36:18.789 [2024-07-26 16:41:38.272526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.789 [2024-07-26 16:41:38.272560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.789 qpair failed and we were unable to recover it. 00:36:18.790 [2024-07-26 16:41:38.272745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.790 [2024-07-26 16:41:38.272779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.790 qpair failed and we were unable to recover it. 00:36:18.790 [2024-07-26 16:41:38.272987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.790 [2024-07-26 16:41:38.273020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.790 qpair failed and we were unable to recover it. 00:36:18.790 [2024-07-26 16:41:38.273205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.790 [2024-07-26 16:41:38.273239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.790 qpair failed and we were unable to recover it. 
00:36:18.790 [2024-07-26 16:41:38.273442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.790 [2024-07-26 16:41:38.273492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.790 qpair failed and we were unable to recover it. 00:36:18.790 [2024-07-26 16:41:38.273716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.790 [2024-07-26 16:41:38.273762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.790 qpair failed and we were unable to recover it. 00:36:18.790 [2024-07-26 16:41:38.273941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.790 [2024-07-26 16:41:38.273975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.790 qpair failed and we were unable to recover it. 00:36:18.790 [2024-07-26 16:41:38.274178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.790 [2024-07-26 16:41:38.274212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.790 qpair failed and we were unable to recover it. 00:36:18.790 [2024-07-26 16:41:38.274426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.790 [2024-07-26 16:41:38.274459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.790 qpair failed and we were unable to recover it. 00:36:18.790 [2024-07-26 16:41:38.274616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.790 [2024-07-26 16:41:38.274666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.790 qpair failed and we were unable to recover it. 00:36:18.790 [2024-07-26 16:41:38.274868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.790 [2024-07-26 16:41:38.274902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.790 qpair failed and we were unable to recover it. 00:36:18.790 [2024-07-26 16:41:38.275071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.790 [2024-07-26 16:41:38.275118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.790 qpair failed and we were unable to recover it. 00:36:18.790 [2024-07-26 16:41:38.275323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.790 [2024-07-26 16:41:38.275368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.790 qpair failed and we were unable to recover it. 00:36:18.790 [2024-07-26 16:41:38.275519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.790 [2024-07-26 16:41:38.275554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.790 qpair failed and we were unable to recover it. 
00:36:18.790 [2024-07-26 16:41:38.275768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.790 [2024-07-26 16:41:38.275802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.790 qpair failed and we were unable to recover it. 00:36:18.790 [2024-07-26 16:41:38.275989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.790 [2024-07-26 16:41:38.276024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.790 qpair failed and we were unable to recover it. 00:36:18.790 [2024-07-26 16:41:38.276216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.790 [2024-07-26 16:41:38.276264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.790 qpair failed and we were unable to recover it. 00:36:18.790 [2024-07-26 16:41:38.276465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.790 [2024-07-26 16:41:38.276501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.790 qpair failed and we were unable to recover it. 00:36:18.790 [2024-07-26 16:41:38.276690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.790 [2024-07-26 16:41:38.276729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.790 qpair failed and we were unable to recover it. 00:36:18.790 [2024-07-26 16:41:38.276910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.790 [2024-07-26 16:41:38.276944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.790 qpair failed and we were unable to recover it. 00:36:18.790 [2024-07-26 16:41:38.277124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.790 [2024-07-26 16:41:38.277159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.790 qpair failed and we were unable to recover it. 00:36:18.790 [2024-07-26 16:41:38.277341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.790 [2024-07-26 16:41:38.277385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.790 qpair failed and we were unable to recover it. 00:36:18.790 [2024-07-26 16:41:38.277554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.790 [2024-07-26 16:41:38.277589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.790 qpair failed and we were unable to recover it. 00:36:18.790 [2024-07-26 16:41:38.277791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.790 [2024-07-26 16:41:38.277825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.790 qpair failed and we were unable to recover it. 
00:36:18.790 [2024-07-26 16:41:38.277984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.790 [2024-07-26 16:41:38.278018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.790 qpair failed and we were unable to recover it. 00:36:18.790 [2024-07-26 16:41:38.278212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.790 [2024-07-26 16:41:38.278246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.790 qpair failed and we were unable to recover it. 00:36:18.790 [2024-07-26 16:41:38.278437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.790 [2024-07-26 16:41:38.278471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.790 qpair failed and we were unable to recover it. 00:36:18.790 [2024-07-26 16:41:38.278646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.790 [2024-07-26 16:41:38.278679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.790 qpair failed and we were unable to recover it. 00:36:18.790 [2024-07-26 16:41:38.278872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.790 [2024-07-26 16:41:38.278906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.790 qpair failed and we were unable to recover it. 00:36:18.790 [2024-07-26 16:41:38.279049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.790 [2024-07-26 16:41:38.279112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.790 qpair failed and we were unable to recover it. 00:36:18.790 [2024-07-26 16:41:38.279294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.790 [2024-07-26 16:41:38.279341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.790 qpair failed and we were unable to recover it. 00:36:18.790 [2024-07-26 16:41:38.279500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.790 [2024-07-26 16:41:38.279534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.790 qpair failed and we were unable to recover it. 00:36:18.790 [2024-07-26 16:41:38.279721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.790 [2024-07-26 16:41:38.279755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.790 qpair failed and we were unable to recover it. 00:36:18.790 [2024-07-26 16:41:38.279906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.790 [2024-07-26 16:41:38.279940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.790 qpair failed and we were unable to recover it. 
00:36:18.790 [2024-07-26 16:41:38.280115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.790 [2024-07-26 16:41:38.280149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.790 qpair failed and we were unable to recover it. 00:36:18.790 [2024-07-26 16:41:38.280296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.790 [2024-07-26 16:41:38.280330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.790 qpair failed and we were unable to recover it. 00:36:18.790 [2024-07-26 16:41:38.280513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.790 [2024-07-26 16:41:38.280547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.790 qpair failed and we were unable to recover it. 00:36:18.790 [2024-07-26 16:41:38.280705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.790 [2024-07-26 16:41:38.280739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.790 qpair failed and we were unable to recover it. 00:36:18.791 [2024-07-26 16:41:38.280940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.791 [2024-07-26 16:41:38.280974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.791 qpair failed and we were unable to recover it. 00:36:18.791 [2024-07-26 16:41:38.281237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.791 [2024-07-26 16:41:38.281271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.791 qpair failed and we were unable to recover it. 00:36:18.791 [2024-07-26 16:41:38.281435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.791 [2024-07-26 16:41:38.281469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.791 qpair failed and we were unable to recover it. 00:36:18.791 [2024-07-26 16:41:38.281618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.791 [2024-07-26 16:41:38.281652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.791 qpair failed and we were unable to recover it. 00:36:18.791 [2024-07-26 16:41:38.281835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.791 [2024-07-26 16:41:38.281869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.791 qpair failed and we were unable to recover it. 00:36:18.791 [2024-07-26 16:41:38.282042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.791 [2024-07-26 16:41:38.282100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.791 qpair failed and we were unable to recover it. 
00:36:18.791 [2024-07-26 16:41:38.282275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.791 [2024-07-26 16:41:38.282309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.791 qpair failed and we were unable to recover it. 00:36:18.791 [2024-07-26 16:41:38.282502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.791 [2024-07-26 16:41:38.282536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.791 qpair failed and we were unable to recover it. 00:36:18.791 [2024-07-26 16:41:38.282711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.791 [2024-07-26 16:41:38.282745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.791 qpair failed and we were unable to recover it. 00:36:18.791 [2024-07-26 16:41:38.282949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.791 [2024-07-26 16:41:38.282982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.791 qpair failed and we were unable to recover it. 00:36:18.791 [2024-07-26 16:41:38.283182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.791 [2024-07-26 16:41:38.283217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.791 qpair failed and we were unable to recover it. 00:36:18.791 [2024-07-26 16:41:38.283371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.791 [2024-07-26 16:41:38.283405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.791 qpair failed and we were unable to recover it. 00:36:18.791 [2024-07-26 16:41:38.283613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.791 [2024-07-26 16:41:38.283647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.791 qpair failed and we were unable to recover it. 00:36:18.791 [2024-07-26 16:41:38.283804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.791 [2024-07-26 16:41:38.283839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.791 qpair failed and we were unable to recover it. 00:36:18.791 [2024-07-26 16:41:38.284037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.791 [2024-07-26 16:41:38.284079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.791 qpair failed and we were unable to recover it. 00:36:18.791 [2024-07-26 16:41:38.284252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.791 [2024-07-26 16:41:38.284285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.791 qpair failed and we were unable to recover it. 
00:36:18.791 [2024-07-26 16:41:38.284462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.791 [2024-07-26 16:41:38.284496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.791 qpair failed and we were unable to recover it. 00:36:18.791 [2024-07-26 16:41:38.284695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.791 [2024-07-26 16:41:38.284729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.791 qpair failed and we were unable to recover it. 00:36:18.791 [2024-07-26 16:41:38.284871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.791 [2024-07-26 16:41:38.284905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.791 qpair failed and we were unable to recover it. 00:36:18.791 [2024-07-26 16:41:38.285163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.791 [2024-07-26 16:41:38.285197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.791 qpair failed and we were unable to recover it. 00:36:18.791 [2024-07-26 16:41:38.285394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.791 [2024-07-26 16:41:38.285449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.791 qpair failed and we were unable to recover it. 00:36:18.791 [2024-07-26 16:41:38.285660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.791 [2024-07-26 16:41:38.285697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.791 qpair failed and we were unable to recover it. 00:36:18.791 [2024-07-26 16:41:38.285866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.791 [2024-07-26 16:41:38.285901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.791 qpair failed and we were unable to recover it. 00:36:18.791 [2024-07-26 16:41:38.286075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.791 [2024-07-26 16:41:38.286118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.791 qpair failed and we were unable to recover it. 00:36:18.791 [2024-07-26 16:41:38.286300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.791 [2024-07-26 16:41:38.286333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.791 qpair failed and we were unable to recover it. 00:36:18.791 [2024-07-26 16:41:38.286519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.791 [2024-07-26 16:41:38.286555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.791 qpair failed and we were unable to recover it. 
00:36:18.791 [2024-07-26 16:41:38.286735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.791 [2024-07-26 16:41:38.286770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.791 qpair failed and we were unable to recover it. 00:36:18.791 [2024-07-26 16:41:38.286929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.791 [2024-07-26 16:41:38.286963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.791 qpair failed and we were unable to recover it. 00:36:18.791 [2024-07-26 16:41:38.287149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.791 [2024-07-26 16:41:38.287184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.791 qpair failed and we were unable to recover it. 00:36:18.791 [2024-07-26 16:41:38.287350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.791 [2024-07-26 16:41:38.287384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.791 qpair failed and we were unable to recover it. 00:36:18.791 [2024-07-26 16:41:38.287523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.791 [2024-07-26 16:41:38.287557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.791 qpair failed and we were unable to recover it. 00:36:18.791 [2024-07-26 16:41:38.287703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.791 [2024-07-26 16:41:38.287738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.791 qpair failed and we were unable to recover it. 00:36:18.791 [2024-07-26 16:41:38.287942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.791 [2024-07-26 16:41:38.287976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.791 qpair failed and we were unable to recover it. 00:36:18.791 [2024-07-26 16:41:38.288234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.791 [2024-07-26 16:41:38.288267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.791 qpair failed and we were unable to recover it. 00:36:18.791 [2024-07-26 16:41:38.288485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.791 [2024-07-26 16:41:38.288520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.791 qpair failed and we were unable to recover it. 00:36:18.791 [2024-07-26 16:41:38.288724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.791 [2024-07-26 16:41:38.288758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.791 qpair failed and we were unable to recover it. 
00:36:18.791 [2024-07-26 16:41:38.288916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.791 [2024-07-26 16:41:38.288951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.791 qpair failed and we were unable to recover it. 00:36:18.791 [2024-07-26 16:41:38.289144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.792 [2024-07-26 16:41:38.289178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.792 qpair failed and we were unable to recover it. 00:36:18.792 [2024-07-26 16:41:38.289358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.792 [2024-07-26 16:41:38.289392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.792 qpair failed and we were unable to recover it. 00:36:18.792 [2024-07-26 16:41:38.289564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.792 [2024-07-26 16:41:38.289598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.792 qpair failed and we were unable to recover it. 00:36:18.792 [2024-07-26 16:41:38.289796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.792 [2024-07-26 16:41:38.289830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.792 qpair failed and we were unable to recover it. 00:36:18.792 [2024-07-26 16:41:38.290013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.792 [2024-07-26 16:41:38.290048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.792 qpair failed and we were unable to recover it. 00:36:18.792 [2024-07-26 16:41:38.290209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.792 [2024-07-26 16:41:38.290243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.792 qpair failed and we were unable to recover it. 00:36:18.792 [2024-07-26 16:41:38.290422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.792 [2024-07-26 16:41:38.290456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.792 qpair failed and we were unable to recover it. 00:36:18.792 [2024-07-26 16:41:38.290602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.792 [2024-07-26 16:41:38.290636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.792 qpair failed and we were unable to recover it. 00:36:18.792 [2024-07-26 16:41:38.290794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.792 [2024-07-26 16:41:38.290829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.792 qpair failed and we were unable to recover it. 
00:36:18.792 [2024-07-26 16:41:38.291002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.792 [2024-07-26 16:41:38.291036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.792 qpair failed and we were unable to recover it.
[... the same error triplet — posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error, followed by "qpair failed and we were unable to recover it." — repeats continuously from 16:41:38.291 through 16:41:38.336 for tqpair values 0x6150001f2780, 0x6150001ffe80, and 0x615000210000, every attempt targeting addr=10.0.0.2, port=4420 ...]
00:36:18.798 [2024-07-26 16:41:38.336121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.798 [2024-07-26 16:41:38.336156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.798 qpair failed and we were unable to recover it.
00:36:18.798 [2024-07-26 16:41:38.336338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.798 [2024-07-26 16:41:38.336372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.798 qpair failed and we were unable to recover it. 00:36:18.798 [2024-07-26 16:41:38.336586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.798 [2024-07-26 16:41:38.336620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.798 qpair failed and we were unable to recover it. 00:36:18.798 [2024-07-26 16:41:38.336794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.798 [2024-07-26 16:41:38.336829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.798 qpair failed and we were unable to recover it. 00:36:18.798 [2024-07-26 16:41:38.337003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.798 [2024-07-26 16:41:38.337038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.798 qpair failed and we were unable to recover it. 00:36:18.798 [2024-07-26 16:41:38.337226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.798 [2024-07-26 16:41:38.337265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.798 qpair failed and we were unable to recover it. 00:36:18.798 [2024-07-26 16:41:38.337445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.798 [2024-07-26 16:41:38.337481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.798 qpair failed and we were unable to recover it. 00:36:18.798 [2024-07-26 16:41:38.337653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.798 [2024-07-26 16:41:38.337688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.798 qpair failed and we were unable to recover it. 00:36:18.798 [2024-07-26 16:41:38.337884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.798 [2024-07-26 16:41:38.337918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.798 qpair failed and we were unable to recover it. 00:36:18.798 [2024-07-26 16:41:38.338104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.798 [2024-07-26 16:41:38.338140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.798 qpair failed and we were unable to recover it. 00:36:18.798 [2024-07-26 16:41:38.338318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.798 [2024-07-26 16:41:38.338353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.798 qpair failed and we were unable to recover it. 
00:36:18.798 [2024-07-26 16:41:38.338554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.798 [2024-07-26 16:41:38.338588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.798 qpair failed and we were unable to recover it. 00:36:18.798 [2024-07-26 16:41:38.338793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.798 [2024-07-26 16:41:38.338828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.798 qpair failed and we were unable to recover it. 00:36:18.798 [2024-07-26 16:41:38.339026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.798 [2024-07-26 16:41:38.339069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.798 qpair failed and we were unable to recover it. 00:36:18.798 [2024-07-26 16:41:38.339269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.798 [2024-07-26 16:41:38.339304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.798 qpair failed and we were unable to recover it. 00:36:18.798 [2024-07-26 16:41:38.339505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.798 [2024-07-26 16:41:38.339540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.798 qpair failed and we were unable to recover it. 00:36:18.798 [2024-07-26 16:41:38.339716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.798 [2024-07-26 16:41:38.339751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.798 qpair failed and we were unable to recover it. 00:36:18.798 [2024-07-26 16:41:38.339927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.798 [2024-07-26 16:41:38.339962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.798 qpair failed and we were unable to recover it. 00:36:18.798 [2024-07-26 16:41:38.340109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.798 [2024-07-26 16:41:38.340144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.798 qpair failed and we were unable to recover it. 00:36:18.798 [2024-07-26 16:41:38.340337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.798 [2024-07-26 16:41:38.340386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.798 qpair failed and we were unable to recover it. 00:36:18.798 [2024-07-26 16:41:38.340611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.798 [2024-07-26 16:41:38.340661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.798 qpair failed and we were unable to recover it. 
00:36:18.798 [2024-07-26 16:41:38.340847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.798 [2024-07-26 16:41:38.340884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.798 qpair failed and we were unable to recover it. 00:36:18.798 [2024-07-26 16:41:38.341069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.798 [2024-07-26 16:41:38.341105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.798 qpair failed and we were unable to recover it. 00:36:18.799 [2024-07-26 16:41:38.341280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.799 [2024-07-26 16:41:38.341315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.799 qpair failed and we were unable to recover it. 00:36:18.799 [2024-07-26 16:41:38.341496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.799 [2024-07-26 16:41:38.341531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.799 qpair failed and we were unable to recover it. 00:36:18.799 [2024-07-26 16:41:38.341708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.799 [2024-07-26 16:41:38.341742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.799 qpair failed and we were unable to recover it. 00:36:18.799 [2024-07-26 16:41:38.341898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.799 [2024-07-26 16:41:38.341933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.799 qpair failed and we were unable to recover it. 00:36:18.799 [2024-07-26 16:41:38.342088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.799 [2024-07-26 16:41:38.342123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.799 qpair failed and we were unable to recover it. 00:36:18.799 [2024-07-26 16:41:38.342313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.799 [2024-07-26 16:41:38.342348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.799 qpair failed and we were unable to recover it. 00:36:18.799 [2024-07-26 16:41:38.342530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.799 [2024-07-26 16:41:38.342575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.799 qpair failed and we were unable to recover it. 00:36:18.799 [2024-07-26 16:41:38.342762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.799 [2024-07-26 16:41:38.342797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.799 qpair failed and we were unable to recover it. 
00:36:18.799 [2024-07-26 16:41:38.342954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.799 [2024-07-26 16:41:38.342989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.799 qpair failed and we were unable to recover it. 00:36:18.799 [2024-07-26 16:41:38.343179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.799 [2024-07-26 16:41:38.343230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.799 qpair failed and we were unable to recover it. 00:36:18.799 [2024-07-26 16:41:38.343395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.799 [2024-07-26 16:41:38.343432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.799 qpair failed and we were unable to recover it. 00:36:18.799 [2024-07-26 16:41:38.343607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.799 [2024-07-26 16:41:38.343642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.799 qpair failed and we were unable to recover it. 00:36:18.799 [2024-07-26 16:41:38.343847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.799 [2024-07-26 16:41:38.343881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.799 qpair failed and we were unable to recover it. 00:36:18.799 [2024-07-26 16:41:38.344093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.799 [2024-07-26 16:41:38.344128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.799 qpair failed and we were unable to recover it. 00:36:18.799 [2024-07-26 16:41:38.344311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.799 [2024-07-26 16:41:38.344345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.799 qpair failed and we were unable to recover it. 00:36:18.799 [2024-07-26 16:41:38.344548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.799 [2024-07-26 16:41:38.344584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.799 qpair failed and we were unable to recover it. 00:36:18.799 [2024-07-26 16:41:38.344786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.799 [2024-07-26 16:41:38.344822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.799 qpair failed and we were unable to recover it. 00:36:18.799 [2024-07-26 16:41:38.345023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.799 [2024-07-26 16:41:38.345067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.799 qpair failed and we were unable to recover it. 
00:36:18.799 [2024-07-26 16:41:38.345276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.799 [2024-07-26 16:41:38.345311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.799 qpair failed and we were unable to recover it. 00:36:18.799 [2024-07-26 16:41:38.345488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.799 [2024-07-26 16:41:38.345524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.799 qpair failed and we were unable to recover it. 00:36:18.799 [2024-07-26 16:41:38.345701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.799 [2024-07-26 16:41:38.345736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.799 qpair failed and we were unable to recover it. 00:36:18.799 [2024-07-26 16:41:38.345909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.799 [2024-07-26 16:41:38.345943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.799 qpair failed and we were unable to recover it. 00:36:18.799 [2024-07-26 16:41:38.346123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.799 [2024-07-26 16:41:38.346163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.799 qpair failed and we were unable to recover it. 00:36:18.799 [2024-07-26 16:41:38.346318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.799 [2024-07-26 16:41:38.346354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.799 qpair failed and we were unable to recover it. 00:36:18.799 [2024-07-26 16:41:38.346530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.799 [2024-07-26 16:41:38.346566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.799 qpair failed and we were unable to recover it. 00:36:18.799 [2024-07-26 16:41:38.346712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.799 [2024-07-26 16:41:38.346747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.799 qpair failed and we were unable to recover it. 00:36:18.799 [2024-07-26 16:41:38.346928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.799 [2024-07-26 16:41:38.346961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.799 qpair failed and we were unable to recover it. 00:36:18.799 [2024-07-26 16:41:38.347109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.799 [2024-07-26 16:41:38.347144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.799 qpair failed and we were unable to recover it. 
00:36:18.799 [2024-07-26 16:41:38.347295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.799 [2024-07-26 16:41:38.347331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.799 qpair failed and we were unable to recover it. 00:36:18.799 [2024-07-26 16:41:38.347502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.799 [2024-07-26 16:41:38.347537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.799 qpair failed and we were unable to recover it. 00:36:18.799 [2024-07-26 16:41:38.347684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.799 [2024-07-26 16:41:38.347719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.799 qpair failed and we were unable to recover it. 00:36:18.799 [2024-07-26 16:41:38.347903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.799 [2024-07-26 16:41:38.347938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.799 qpair failed and we were unable to recover it. 00:36:18.799 [2024-07-26 16:41:38.348113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.799 [2024-07-26 16:41:38.348149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.799 qpair failed and we were unable to recover it. 00:36:18.799 [2024-07-26 16:41:38.348316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.799 [2024-07-26 16:41:38.348366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.799 qpair failed and we were unable to recover it. 00:36:18.799 [2024-07-26 16:41:38.348527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.799 [2024-07-26 16:41:38.348564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.799 qpair failed and we were unable to recover it. 00:36:18.799 [2024-07-26 16:41:38.348742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.799 [2024-07-26 16:41:38.348776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.799 qpair failed and we were unable to recover it. 00:36:18.799 [2024-07-26 16:41:38.348929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.799 [2024-07-26 16:41:38.348964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.799 qpair failed and we were unable to recover it. 00:36:18.799 [2024-07-26 16:41:38.349128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.800 [2024-07-26 16:41:38.349164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.800 qpair failed and we were unable to recover it. 
00:36:18.800 [2024-07-26 16:41:38.349365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.800 [2024-07-26 16:41:38.349414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.800 qpair failed and we were unable to recover it. 00:36:18.800 [2024-07-26 16:41:38.349585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.800 [2024-07-26 16:41:38.349626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.800 qpair failed and we were unable to recover it. 00:36:18.800 [2024-07-26 16:41:38.349789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.800 [2024-07-26 16:41:38.349825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.800 qpair failed and we were unable to recover it. 00:36:18.800 [2024-07-26 16:41:38.350007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.800 [2024-07-26 16:41:38.350042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.800 qpair failed and we were unable to recover it. 00:36:18.800 [2024-07-26 16:41:38.350247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.800 [2024-07-26 16:41:38.350296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.800 qpair failed and we were unable to recover it. 00:36:18.800 [2024-07-26 16:41:38.350485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.800 [2024-07-26 16:41:38.350523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.800 qpair failed and we were unable to recover it. 00:36:18.800 [2024-07-26 16:41:38.350707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.800 [2024-07-26 16:41:38.350743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.800 qpair failed and we were unable to recover it. 00:36:18.800 [2024-07-26 16:41:38.350915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.800 [2024-07-26 16:41:38.350951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.800 qpair failed and we were unable to recover it. 00:36:18.800 [2024-07-26 16:41:38.351131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.800 [2024-07-26 16:41:38.351167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.800 qpair failed and we were unable to recover it. 00:36:18.800 [2024-07-26 16:41:38.351348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.800 [2024-07-26 16:41:38.351383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.800 qpair failed and we were unable to recover it. 
00:36:18.800 [2024-07-26 16:41:38.351563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.800 [2024-07-26 16:41:38.351598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.800 qpair failed and we were unable to recover it. 00:36:18.800 [2024-07-26 16:41:38.351783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.800 [2024-07-26 16:41:38.351836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.800 qpair failed and we were unable to recover it. 00:36:18.800 [2024-07-26 16:41:38.352002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.800 [2024-07-26 16:41:38.352039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.800 qpair failed and we were unable to recover it. 00:36:18.800 [2024-07-26 16:41:38.352254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.800 [2024-07-26 16:41:38.352289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.800 qpair failed and we were unable to recover it. 00:36:18.800 [2024-07-26 16:41:38.352470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.800 [2024-07-26 16:41:38.352506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.800 qpair failed and we were unable to recover it. 00:36:18.800 [2024-07-26 16:41:38.352682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.800 [2024-07-26 16:41:38.352716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.800 qpair failed and we were unable to recover it. 00:36:18.800 [2024-07-26 16:41:38.352906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.800 [2024-07-26 16:41:38.352942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.800 qpair failed and we were unable to recover it. 00:36:18.800 [2024-07-26 16:41:38.353126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.800 [2024-07-26 16:41:38.353173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.800 qpair failed and we were unable to recover it. 00:36:18.800 [2024-07-26 16:41:38.353375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.800 [2024-07-26 16:41:38.353410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.800 qpair failed and we were unable to recover it. 00:36:18.800 [2024-07-26 16:41:38.353591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.800 [2024-07-26 16:41:38.353626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.800 qpair failed and we were unable to recover it. 
00:36:18.800 [2024-07-26 16:41:38.353830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.800 [2024-07-26 16:41:38.353864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.800 qpair failed and we were unable to recover it. 00:36:18.800 [2024-07-26 16:41:38.354034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.800 [2024-07-26 16:41:38.354089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.800 qpair failed and we were unable to recover it. 00:36:18.800 [2024-07-26 16:41:38.354253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.800 [2024-07-26 16:41:38.354291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.800 qpair failed and we were unable to recover it. 00:36:18.800 [2024-07-26 16:41:38.354474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.800 [2024-07-26 16:41:38.354510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.800 qpair failed and we were unable to recover it. 00:36:18.800 [2024-07-26 16:41:38.354718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.800 [2024-07-26 16:41:38.354759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.800 qpair failed and we were unable to recover it. 00:36:18.800 [2024-07-26 16:41:38.354925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.800 [2024-07-26 16:41:38.354960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.800 qpair failed and we were unable to recover it. 00:36:18.800 [2024-07-26 16:41:38.355136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.800 [2024-07-26 16:41:38.355171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.800 qpair failed and we were unable to recover it. 00:36:18.800 [2024-07-26 16:41:38.355370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.800 [2024-07-26 16:41:38.355420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.800 qpair failed and we were unable to recover it. 00:36:18.800 [2024-07-26 16:41:38.355614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.800 [2024-07-26 16:41:38.355652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.800 qpair failed and we were unable to recover it. 00:36:18.801 [2024-07-26 16:41:38.355832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.801 [2024-07-26 16:41:38.355867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.801 qpair failed and we were unable to recover it. 
00:36:18.801 [2024-07-26 16:41:38.356016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.801 [2024-07-26 16:41:38.356051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.801 qpair failed and we were unable to recover it. 00:36:18.801 [2024-07-26 16:41:38.356243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.801 [2024-07-26 16:41:38.356278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.801 qpair failed and we were unable to recover it. 00:36:18.801 [2024-07-26 16:41:38.356436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.801 [2024-07-26 16:41:38.356472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.801 qpair failed and we were unable to recover it. 00:36:18.801 [2024-07-26 16:41:38.356624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.801 [2024-07-26 16:41:38.356661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.801 qpair failed and we were unable to recover it. 00:36:18.801 [2024-07-26 16:41:38.356868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.801 [2024-07-26 16:41:38.356903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.801 qpair failed and we were unable to recover it. 00:36:18.801 [2024-07-26 16:41:38.357083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.801 [2024-07-26 16:41:38.357120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.801 qpair failed and we were unable to recover it. 00:36:18.801 [2024-07-26 16:41:38.357273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.801 [2024-07-26 16:41:38.357309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.801 qpair failed and we were unable to recover it. 00:36:18.801 [2024-07-26 16:41:38.357456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.801 [2024-07-26 16:41:38.357492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.801 qpair failed and we were unable to recover it. 00:36:18.801 [2024-07-26 16:41:38.357655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.801 [2024-07-26 16:41:38.357690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.801 qpair failed and we were unable to recover it. 00:36:18.801 [2024-07-26 16:41:38.357839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.801 [2024-07-26 16:41:38.357874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.801 qpair failed and we were unable to recover it. 
00:36:18.801 [2024-07-26 16:41:38.358049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.801 [2024-07-26 16:41:38.358094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.801 qpair failed and we were unable to recover it. 00:36:18.801 [2024-07-26 16:41:38.358273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.801 [2024-07-26 16:41:38.358308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.801 qpair failed and we were unable to recover it. 00:36:18.801 [2024-07-26 16:41:38.358502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.801 [2024-07-26 16:41:38.358537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.801 qpair failed and we were unable to recover it. 00:36:18.801 [2024-07-26 16:41:38.358716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.801 [2024-07-26 16:41:38.358752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.801 qpair failed and we were unable to recover it. 00:36:18.801 [2024-07-26 16:41:38.358957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.801 [2024-07-26 16:41:38.358993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.801 qpair failed and we were unable to recover it. 00:36:18.801 [2024-07-26 16:41:38.359224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.801 [2024-07-26 16:41:38.359274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.801 qpair failed and we were unable to recover it. 00:36:18.801 [2024-07-26 16:41:38.359503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.801 [2024-07-26 16:41:38.359554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.801 qpair failed and we were unable to recover it. 00:36:18.801 [2024-07-26 16:41:38.359810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.801 [2024-07-26 16:41:38.359859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.801 qpair failed and we were unable to recover it. 00:36:18.801 [2024-07-26 16:41:38.360038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.801 [2024-07-26 16:41:38.360082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.801 qpair failed and we were unable to recover it. 00:36:18.801 [2024-07-26 16:41:38.360264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.801 [2024-07-26 16:41:38.360299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.801 qpair failed and we were unable to recover it. 
00:36:18.801 [2024-07-26 16:41:38.360500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.801 [2024-07-26 16:41:38.360534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.801 qpair failed and we were unable to recover it. 00:36:18.801 [2024-07-26 16:41:38.360692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.801 [2024-07-26 16:41:38.360726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.801 qpair failed and we were unable to recover it. 00:36:18.801 [2024-07-26 16:41:38.360909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.801 [2024-07-26 16:41:38.360942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.801 qpair failed and we were unable to recover it. 00:36:18.801 [2024-07-26 16:41:38.361116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.801 [2024-07-26 16:41:38.361166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.801 qpair failed and we were unable to recover it. 00:36:18.801 [2024-07-26 16:41:38.361394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.801 [2024-07-26 16:41:38.361444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.801 qpair failed and we were unable to recover it. 00:36:18.801 [2024-07-26 16:41:38.361634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.801 [2024-07-26 16:41:38.361670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.801 qpair failed and we were unable to recover it. 00:36:18.801 [2024-07-26 16:41:38.361841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.801 [2024-07-26 16:41:38.361876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.801 qpair failed and we were unable to recover it. 00:36:18.801 [2024-07-26 16:41:38.362056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.801 [2024-07-26 16:41:38.362097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.801 qpair failed and we were unable to recover it. 00:36:18.801 [2024-07-26 16:41:38.362244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.801 [2024-07-26 16:41:38.362278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.801 qpair failed and we were unable to recover it. 00:36:18.801 [2024-07-26 16:41:38.362435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.801 [2024-07-26 16:41:38.362471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.801 qpair failed and we were unable to recover it. 
00:36:18.801 [2024-07-26 16:41:38.362626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.801 [2024-07-26 16:41:38.362661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.801 qpair failed and we were unable to recover it. 00:36:18.801 [2024-07-26 16:41:38.362839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.801 [2024-07-26 16:41:38.362874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.801 qpair failed and we were unable to recover it. 00:36:18.801 [2024-07-26 16:41:38.363027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.801 [2024-07-26 16:41:38.363067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.801 qpair failed and we were unable to recover it. 00:36:18.801 [2024-07-26 16:41:38.363229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.801 [2024-07-26 16:41:38.363264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.801 qpair failed and we were unable to recover it. 00:36:18.801 [2024-07-26 16:41:38.363441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.801 [2024-07-26 16:41:38.363479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.801 qpair failed and we were unable to recover it. 00:36:18.801 [2024-07-26 16:41:38.363655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.802 [2024-07-26 16:41:38.363689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.802 qpair failed and we were unable to recover it. 00:36:18.802 [2024-07-26 16:41:38.363858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.802 [2024-07-26 16:41:38.363892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.802 qpair failed and we were unable to recover it. 00:36:18.802 [2024-07-26 16:41:38.364072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.802 [2024-07-26 16:41:38.364106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.802 qpair failed and we were unable to recover it. 00:36:18.802 [2024-07-26 16:41:38.364269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.802 [2024-07-26 16:41:38.364319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.802 qpair failed and we were unable to recover it. 00:36:18.802 [2024-07-26 16:41:38.364505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.802 [2024-07-26 16:41:38.364558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.802 qpair failed and we were unable to recover it. 
00:36:18.802 [2024-07-26 16:41:38.364756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.802 [2024-07-26 16:41:38.364792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.802 qpair failed and we were unable to recover it. 00:36:18.802 [2024-07-26 16:41:38.364965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.802 [2024-07-26 16:41:38.365000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.802 qpair failed and we were unable to recover it. 00:36:18.802 [2024-07-26 16:41:38.365183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.802 [2024-07-26 16:41:38.365232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.802 qpair failed and we were unable to recover it. 00:36:18.802 [2024-07-26 16:41:38.365438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.802 [2024-07-26 16:41:38.365489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.802 qpair failed and we were unable to recover it. 00:36:18.802 [2024-07-26 16:41:38.365707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.802 [2024-07-26 16:41:38.365745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.802 qpair failed and we were unable to recover it. 00:36:18.802 [2024-07-26 16:41:38.365934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.802 [2024-07-26 16:41:38.365971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.802 qpair failed and we were unable to recover it. 00:36:18.802 [2024-07-26 16:41:38.366178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.802 [2024-07-26 16:41:38.366214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.802 qpair failed and we were unable to recover it. 00:36:18.802 [2024-07-26 16:41:38.366391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.802 [2024-07-26 16:41:38.366426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.802 qpair failed and we were unable to recover it. 00:36:18.802 [2024-07-26 16:41:38.366587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.802 [2024-07-26 16:41:38.366622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.802 qpair failed and we were unable to recover it. 00:36:18.802 [2024-07-26 16:41:38.366808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.802 [2024-07-26 16:41:38.366844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.802 qpair failed and we were unable to recover it. 
00:36:18.802 [2024-07-26 16:41:38.367024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.802 [2024-07-26 16:41:38.367070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.802 qpair failed and we were unable to recover it. 00:36:18.802 [2024-07-26 16:41:38.367274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.802 [2024-07-26 16:41:38.367310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.802 qpair failed and we were unable to recover it. 00:36:18.802 [2024-07-26 16:41:38.367514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.802 [2024-07-26 16:41:38.367550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.802 qpair failed and we were unable to recover it. 00:36:18.802 [2024-07-26 16:41:38.367753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.802 [2024-07-26 16:41:38.367788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.802 qpair failed and we were unable to recover it. 00:36:18.802 [2024-07-26 16:41:38.368014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.802 [2024-07-26 16:41:38.368074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.802 qpair failed and we were unable to recover it. 00:36:18.802 [2024-07-26 16:41:38.368291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.802 [2024-07-26 16:41:38.368328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.802 qpair failed and we were unable to recover it. 00:36:18.802 [2024-07-26 16:41:38.368474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.802 [2024-07-26 16:41:38.368509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.802 qpair failed and we were unable to recover it. 00:36:18.802 [2024-07-26 16:41:38.368690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.802 [2024-07-26 16:41:38.368725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.802 qpair failed and we were unable to recover it. 00:36:18.802 [2024-07-26 16:41:38.368878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.802 [2024-07-26 16:41:38.368913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.802 qpair failed and we were unable to recover it. 00:36:18.802 [2024-07-26 16:41:38.369097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.802 [2024-07-26 16:41:38.369132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.802 qpair failed and we were unable to recover it. 
00:36:18.802 [2024-07-26 16:41:38.369316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.802 [2024-07-26 16:41:38.369352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.802 qpair failed and we were unable to recover it. 00:36:18.802 [2024-07-26 16:41:38.369508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.802 [2024-07-26 16:41:38.369543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.802 qpair failed and we were unable to recover it. 00:36:18.802 [2024-07-26 16:41:38.369695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.802 [2024-07-26 16:41:38.369730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.802 qpair failed and we were unable to recover it. 00:36:18.802 [2024-07-26 16:41:38.369939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.802 [2024-07-26 16:41:38.369973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.802 qpair failed and we were unable to recover it. 00:36:18.802 [2024-07-26 16:41:38.370129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.802 [2024-07-26 16:41:38.370165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.802 qpair failed and we were unable to recover it. 00:36:18.803 [2024-07-26 16:41:38.370333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.803 [2024-07-26 16:41:38.370383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.803 qpair failed and we were unable to recover it. 00:36:18.803 [2024-07-26 16:41:38.370598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.803 [2024-07-26 16:41:38.370635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.803 qpair failed and we were unable to recover it. 00:36:18.803 [2024-07-26 16:41:38.370820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.803 [2024-07-26 16:41:38.370855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.803 qpair failed and we were unable to recover it. 00:36:18.803 [2024-07-26 16:41:38.371037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.803 [2024-07-26 16:41:38.371079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.803 qpair failed and we were unable to recover it. 00:36:18.803 [2024-07-26 16:41:38.371258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.803 [2024-07-26 16:41:38.371308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.803 qpair failed and we were unable to recover it. 
00:36:18.803 [2024-07-26 16:41:38.371488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.803 [2024-07-26 16:41:38.371525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.803 qpair failed and we were unable to recover it. 00:36:18.803 [2024-07-26 16:41:38.371787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.803 [2024-07-26 16:41:38.371823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.803 qpair failed and we were unable to recover it. 00:36:18.803 [2024-07-26 16:41:38.372082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.803 [2024-07-26 16:41:38.372117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.803 qpair failed and we were unable to recover it. 00:36:18.803 [2024-07-26 16:41:38.372295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.803 [2024-07-26 16:41:38.372329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.803 qpair failed and we were unable to recover it. 00:36:18.803 [2024-07-26 16:41:38.372531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.803 [2024-07-26 16:41:38.372572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.803 qpair failed and we were unable to recover it. 00:36:18.803 [2024-07-26 16:41:38.372725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.803 [2024-07-26 16:41:38.372760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.803 qpair failed and we were unable to recover it. 00:36:18.803 [2024-07-26 16:41:38.372931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.803 [2024-07-26 16:41:38.372966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.803 qpair failed and we were unable to recover it. 00:36:18.803 [2024-07-26 16:41:38.373170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.803 [2024-07-26 16:41:38.373204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.803 qpair failed and we were unable to recover it. 00:36:18.803 [2024-07-26 16:41:38.373424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.803 [2024-07-26 16:41:38.373474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.803 qpair failed and we were unable to recover it. 00:36:18.803 [2024-07-26 16:41:38.373661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.803 [2024-07-26 16:41:38.373697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.803 qpair failed and we were unable to recover it. 
00:36:18.803 [2024-07-26 16:41:38.373879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.803 [2024-07-26 16:41:38.373914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.803 qpair failed and we were unable to recover it. 00:36:18.803 [2024-07-26 16:41:38.374090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.803 [2024-07-26 16:41:38.374125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.803 qpair failed and we were unable to recover it. 00:36:18.803 [2024-07-26 16:41:38.374293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.803 [2024-07-26 16:41:38.374342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.803 qpair failed and we were unable to recover it. 00:36:18.803 [2024-07-26 16:41:38.374565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.803 [2024-07-26 16:41:38.374603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.803 qpair failed and we were unable to recover it. 00:36:18.803 [2024-07-26 16:41:38.374809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.803 [2024-07-26 16:41:38.374845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.803 qpair failed and we were unable to recover it. 00:36:18.803 [2024-07-26 16:41:38.375023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.803 [2024-07-26 16:41:38.375067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.803 qpair failed and we were unable to recover it. 00:36:18.803 [2024-07-26 16:41:38.375253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.803 [2024-07-26 16:41:38.375288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.803 qpair failed and we were unable to recover it. 00:36:18.803 [2024-07-26 16:41:38.375465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.803 [2024-07-26 16:41:38.375500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.803 qpair failed and we were unable to recover it. 00:36:18.803 [2024-07-26 16:41:38.375706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.803 [2024-07-26 16:41:38.375756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.803 qpair failed and we were unable to recover it. 00:36:18.803 [2024-07-26 16:41:38.375947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.803 [2024-07-26 16:41:38.375983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.803 qpair failed and we were unable to recover it. 
00:36:18.803 [2024-07-26 16:41:38.376149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.803 [2024-07-26 16:41:38.376184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.803 qpair failed and we were unable to recover it. 00:36:18.803 [2024-07-26 16:41:38.376358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.803 [2024-07-26 16:41:38.376392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.803 qpair failed and we were unable to recover it. 00:36:18.803 [2024-07-26 16:41:38.376569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.803 [2024-07-26 16:41:38.376604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.803 qpair failed and we were unable to recover it. 00:36:18.803 [2024-07-26 16:41:38.376774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.803 [2024-07-26 16:41:38.376808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.803 qpair failed and we were unable to recover it. 00:36:18.803 [2024-07-26 16:41:38.377015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.803 [2024-07-26 16:41:38.377050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.803 qpair failed and we were unable to recover it. 00:36:18.803 [2024-07-26 16:41:38.377219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.803 [2024-07-26 16:41:38.377253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.803 qpair failed and we were unable to recover it. 00:36:18.803 [2024-07-26 16:41:38.377401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.803 [2024-07-26 16:41:38.377435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.803 qpair failed and we were unable to recover it. 00:36:18.804 [2024-07-26 16:41:38.377634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.804 [2024-07-26 16:41:38.377668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.804 qpair failed and we were unable to recover it. 00:36:18.804 [2024-07-26 16:41:38.377847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.804 [2024-07-26 16:41:38.377881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.804 qpair failed and we were unable to recover it. 00:36:18.804 [2024-07-26 16:41:38.378091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.804 [2024-07-26 16:41:38.378125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.804 qpair failed and we were unable to recover it. 
00:36:18.804 [2024-07-26 16:41:38.378300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.804 [2024-07-26 16:41:38.378350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.804 qpair failed and we were unable to recover it. 00:36:18.804 [2024-07-26 16:41:38.378554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.804 [2024-07-26 16:41:38.378603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.804 qpair failed and we were unable to recover it. 00:36:18.804 [2024-07-26 16:41:38.378761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.804 [2024-07-26 16:41:38.378799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.804 qpair failed and we were unable to recover it. 00:36:18.804 [2024-07-26 16:41:38.378973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.804 [2024-07-26 16:41:38.379010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.804 qpair failed and we were unable to recover it. 00:36:18.804 [2024-07-26 16:41:38.379166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.804 [2024-07-26 16:41:38.379226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.804 qpair failed and we were unable to recover it. 00:36:18.804 [2024-07-26 16:41:38.379434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.804 [2024-07-26 16:41:38.379468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.804 qpair failed and we were unable to recover it. 00:36:18.804 [2024-07-26 16:41:38.379667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.804 [2024-07-26 16:41:38.379701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.804 qpair failed and we were unable to recover it. 00:36:18.804 [2024-07-26 16:41:38.379888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.804 [2024-07-26 16:41:38.379923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.804 qpair failed and we were unable to recover it. 00:36:18.804 [2024-07-26 16:41:38.380118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.804 [2024-07-26 16:41:38.380167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.804 qpair failed and we were unable to recover it. 00:36:18.804 [2024-07-26 16:41:38.380341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.804 [2024-07-26 16:41:38.380391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.804 qpair failed and we were unable to recover it. 
00:36:18.804 [2024-07-26 16:41:38.380555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.804 [2024-07-26 16:41:38.380591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.804 qpair failed and we were unable to recover it. 00:36:18.804 [2024-07-26 16:41:38.380775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.804 [2024-07-26 16:41:38.380816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.804 qpair failed and we were unable to recover it. 00:36:18.804 [2024-07-26 16:41:38.380988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.804 [2024-07-26 16:41:38.381023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.804 qpair failed and we were unable to recover it. 00:36:18.804 [2024-07-26 16:41:38.381232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.804 [2024-07-26 16:41:38.381281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.804 qpair failed and we were unable to recover it. 00:36:18.804 [2024-07-26 16:41:38.381460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.804 [2024-07-26 16:41:38.381497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.804 qpair failed and we were unable to recover it. 00:36:18.804 [2024-07-26 16:41:38.381663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.804 [2024-07-26 16:41:38.381699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.804 qpair failed and we were unable to recover it. 00:36:18.804 [2024-07-26 16:41:38.381854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.804 [2024-07-26 16:41:38.381888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.804 qpair failed and we were unable to recover it. 00:36:18.804 [2024-07-26 16:41:38.382084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.804 [2024-07-26 16:41:38.382133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.804 qpair failed and we were unable to recover it. 00:36:18.804 [2024-07-26 16:41:38.382322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.804 [2024-07-26 16:41:38.382359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.804 qpair failed and we were unable to recover it. 00:36:18.804 [2024-07-26 16:41:38.382541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.804 [2024-07-26 16:41:38.382577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.804 qpair failed and we were unable to recover it. 
00:36:18.804 [2024-07-26 16:41:38.382761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.804 [2024-07-26 16:41:38.382796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.804 qpair failed and we were unable to recover it. 00:36:18.804 [2024-07-26 16:41:38.382995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.804 [2024-07-26 16:41:38.383045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.804 qpair failed and we were unable to recover it. 00:36:18.804 [2024-07-26 16:41:38.383230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.804 [2024-07-26 16:41:38.383266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.804 qpair failed and we were unable to recover it. 00:36:18.804 [2024-07-26 16:41:38.383439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.804 [2024-07-26 16:41:38.383489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.804 qpair failed and we were unable to recover it. 00:36:18.804 [2024-07-26 16:41:38.383679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.804 [2024-07-26 16:41:38.383716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.804 qpair failed and we were unable to recover it. 00:36:18.804 [2024-07-26 16:41:38.383895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.804 [2024-07-26 16:41:38.383930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.804 qpair failed and we were unable to recover it. 00:36:18.804 [2024-07-26 16:41:38.384149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.804 [2024-07-26 16:41:38.384185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.804 qpair failed and we were unable to recover it. 00:36:18.805 [2024-07-26 16:41:38.384357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.805 [2024-07-26 16:41:38.384403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.805 qpair failed and we were unable to recover it. 00:36:18.805 [2024-07-26 16:41:38.384556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.805 [2024-07-26 16:41:38.384591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.805 qpair failed and we were unable to recover it. 00:36:18.805 [2024-07-26 16:41:38.384775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.805 [2024-07-26 16:41:38.384810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.805 qpair failed and we were unable to recover it. 
00:36:18.805 [2024-07-26 16:41:38.384980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.805 [2024-07-26 16:41:38.385014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.805 qpair failed and we were unable to recover it. 00:36:18.805 [2024-07-26 16:41:38.385178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.805 [2024-07-26 16:41:38.385214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.805 qpair failed and we were unable to recover it. 00:36:18.805 [2024-07-26 16:41:38.385399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.805 [2024-07-26 16:41:38.385435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.805 qpair failed and we were unable to recover it. 00:36:18.805 [2024-07-26 16:41:38.385590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.805 [2024-07-26 16:41:38.385624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.805 qpair failed and we were unable to recover it. 00:36:18.805 [2024-07-26 16:41:38.385781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.805 [2024-07-26 16:41:38.385815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.805 qpair failed and we were unable to recover it. 00:36:18.805 [2024-07-26 16:41:38.385990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.805 [2024-07-26 16:41:38.386025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.805 qpair failed and we were unable to recover it. 00:36:18.805 [2024-07-26 16:41:38.386224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.805 [2024-07-26 16:41:38.386274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.805 qpair failed and we were unable to recover it. 00:36:18.805 [2024-07-26 16:41:38.386494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.805 [2024-07-26 16:41:38.386531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.805 qpair failed and we were unable to recover it. 00:36:18.805 [2024-07-26 16:41:38.386680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.805 [2024-07-26 16:41:38.386715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.805 qpair failed and we were unable to recover it. 00:36:18.805 [2024-07-26 16:41:38.386887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.805 [2024-07-26 16:41:38.386922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.805 qpair failed and we were unable to recover it. 
00:36:18.805 [2024-07-26 16:41:38.387095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.805 [2024-07-26 16:41:38.387130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.805 qpair failed and we were unable to recover it. 00:36:18.805 [2024-07-26 16:41:38.387314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.805 [2024-07-26 16:41:38.387355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.805 qpair failed and we were unable to recover it. 00:36:18.805 [2024-07-26 16:41:38.387560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.805 [2024-07-26 16:41:38.387603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.805 qpair failed and we were unable to recover it. 00:36:18.805 [2024-07-26 16:41:38.387806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.805 [2024-07-26 16:41:38.387840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.805 qpair failed and we were unable to recover it. 00:36:18.805 [2024-07-26 16:41:38.387994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.805 [2024-07-26 16:41:38.388029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:18.805 qpair failed and we were unable to recover it. 00:36:18.805 [2024-07-26 16:41:38.388210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.805 [2024-07-26 16:41:38.388260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.805 qpair failed and we were unable to recover it. 00:36:18.805 [2024-07-26 16:41:38.388431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.805 [2024-07-26 16:41:38.388480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.805 qpair failed and we were unable to recover it. 00:36:18.805 [2024-07-26 16:41:38.388724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.805 [2024-07-26 16:41:38.388774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.805 qpair failed and we were unable to recover it. 00:36:18.805 [2024-07-26 16:41:38.388928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.805 [2024-07-26 16:41:38.388965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.805 qpair failed and we were unable to recover it. 00:36:18.805 [2024-07-26 16:41:38.389123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.805 [2024-07-26 16:41:38.389158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.805 qpair failed and we were unable to recover it. 
00:36:18.805 [2024-07-26 16:41:38.389372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.805 [2024-07-26 16:41:38.389406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.805 qpair failed and we were unable to recover it. 00:36:18.805 [2024-07-26 16:41:38.389586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.805 [2024-07-26 16:41:38.389622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.805 qpair failed and we were unable to recover it. 00:36:18.805 [2024-07-26 16:41:38.389803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.805 [2024-07-26 16:41:38.389837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.805 qpair failed and we were unable to recover it. 00:36:18.805 [2024-07-26 16:41:38.389992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.805 [2024-07-26 16:41:38.390027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.805 qpair failed and we were unable to recover it. 00:36:18.805 [2024-07-26 16:41:38.390125] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:18.805 [2024-07-26 16:41:38.390182] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:18.805 [2024-07-26 16:41:38.390232] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:18.805 [2024-07-26 16:41:38.390261] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:18.805 [2024-07-26 16:41:38.390284] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:18.805 [2024-07-26 16:41:38.390232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.805 [2024-07-26 16:41:38.390269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.805 qpair failed and we were unable to recover it. 00:36:18.805 [2024-07-26 16:41:38.390406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:36:18.805 [2024-07-26 16:41:38.390457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:36:18.805 [2024-07-26 16:41:38.390490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.805 [2024-07-26 16:41:38.390496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:36:18.805 [2024-07-26 16:41:38.390480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:36:18.805 [2024-07-26 16:41:38.390553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.805 qpair failed and we were unable to recover it. 00:36:18.805 [2024-07-26 16:41:38.390735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.805 [2024-07-26 16:41:38.390769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.805 qpair failed and we were unable to recover it.
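The app_setup_trace notices in the entry above already spell out how a trace snapshot could be pulled from this nvmf target while it runs; the two commands below are only a sketch assembled from those notices (the /tmp destination is an assumed example path, not something this job actually executed).
+ spdk_trace -s nvmf -i 0          # snapshot of runtime events, the invocation suggested by app.c: 604
+ cp /dev/shm/nvmf_trace.0 /tmp/   # keep the shared-memory trace file for offline analysis/debug (app.c: 611)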
00:36:18.805 [2024-07-26 16:41:38.390952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.805 [2024-07-26 16:41:38.390985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.805 qpair failed and we were unable to recover it. 00:36:18.805 [2024-07-26 16:41:38.391139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.805 [2024-07-26 16:41:38.391175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.805 qpair failed and we were unable to recover it. 00:36:18.805 [2024-07-26 16:41:38.391350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.806 [2024-07-26 16:41:38.391384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.806 qpair failed and we were unable to recover it. 00:36:18.806 [2024-07-26 16:41:38.391566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.806 [2024-07-26 16:41:38.391601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.806 qpair failed and we were unable to recover it. 00:36:18.806 [2024-07-26 16:41:38.391754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.806 [2024-07-26 16:41:38.391788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.806 qpair failed and we were unable to recover it. 00:36:18.806 [2024-07-26 16:41:38.391967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.806 [2024-07-26 16:41:38.392001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.806 qpair failed and we were unable to recover it. 00:36:18.806 [2024-07-26 16:41:38.392179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.806 [2024-07-26 16:41:38.392214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.806 qpair failed and we were unable to recover it. 00:36:18.806 [2024-07-26 16:41:38.392369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.806 [2024-07-26 16:41:38.392404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.806 qpair failed and we were unable to recover it. 00:36:18.806 [2024-07-26 16:41:38.392568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.806 [2024-07-26 16:41:38.392604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.806 qpair failed and we were unable to recover it. 00:36:18.806 [2024-07-26 16:41:38.392787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.806 [2024-07-26 16:41:38.392822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.806 qpair failed and we were unable to recover it. 
00:36:18.806 [2024-07-26 16:41:38.392994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.806 [2024-07-26 16:41:38.393028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.806 qpair failed and we were unable to recover it. 00:36:18.806 [2024-07-26 16:41:38.393193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.806 [2024-07-26 16:41:38.393228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.806 qpair failed and we were unable to recover it. 00:36:18.806 [2024-07-26 16:41:38.393384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.806 [2024-07-26 16:41:38.393419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.806 qpair failed and we were unable to recover it. 00:36:18.806 [2024-07-26 16:41:38.393615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.806 [2024-07-26 16:41:38.393649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.806 qpair failed and we were unable to recover it. 00:36:18.806 [2024-07-26 16:41:38.393821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.806 [2024-07-26 16:41:38.393855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.806 qpair failed and we were unable to recover it. 00:36:18.806 [2024-07-26 16:41:38.394119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.806 [2024-07-26 16:41:38.394154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.806 qpair failed and we were unable to recover it. 00:36:18.806 [2024-07-26 16:41:38.394311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.806 [2024-07-26 16:41:38.394346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.806 qpair failed and we were unable to recover it. 00:36:18.806 [2024-07-26 16:41:38.394530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.806 [2024-07-26 16:41:38.394604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.806 qpair failed and we were unable to recover it. 00:36:18.806 [2024-07-26 16:41:38.394750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.806 [2024-07-26 16:41:38.394785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.806 qpair failed and we were unable to recover it. 00:36:18.806 [2024-07-26 16:41:38.394958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.806 [2024-07-26 16:41:38.394992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.806 qpair failed and we were unable to recover it. 
00:36:18.806 [2024-07-26 16:41:38.395178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.806 [2024-07-26 16:41:38.395212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.806 qpair failed and we were unable to recover it. 00:36:18.806 [2024-07-26 16:41:38.395386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.806 [2024-07-26 16:41:38.395420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.806 qpair failed and we were unable to recover it. 00:36:18.806 [2024-07-26 16:41:38.395583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.806 [2024-07-26 16:41:38.395617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.806 qpair failed and we were unable to recover it. 00:36:18.806 [2024-07-26 16:41:38.395771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.806 [2024-07-26 16:41:38.395806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.806 qpair failed and we were unable to recover it. 00:36:18.806 [2024-07-26 16:41:38.395964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.806 [2024-07-26 16:41:38.395997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.806 qpair failed and we were unable to recover it. 00:36:18.806 [2024-07-26 16:41:38.396162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.806 [2024-07-26 16:41:38.396197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.806 qpair failed and we were unable to recover it. 00:36:18.806 [2024-07-26 16:41:38.396354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.806 [2024-07-26 16:41:38.396389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.806 qpair failed and we were unable to recover it. 00:36:18.806 [2024-07-26 16:41:38.396578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.806 [2024-07-26 16:41:38.396612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.806 qpair failed and we were unable to recover it. 00:36:18.806 [2024-07-26 16:41:38.396816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.806 [2024-07-26 16:41:38.396851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.806 qpair failed and we were unable to recover it. 00:36:18.806 [2024-07-26 16:41:38.397001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.806 [2024-07-26 16:41:38.397036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.806 qpair failed and we were unable to recover it. 
00:36:18.806 [2024-07-26 16:41:38.397204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.806 [2024-07-26 16:41:38.397238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.806 qpair failed and we were unable to recover it. 00:36:18.806 [2024-07-26 16:41:38.397407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.806 [2024-07-26 16:41:38.397442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.806 qpair failed and we were unable to recover it. 00:36:18.806 [2024-07-26 16:41:38.397699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.806 [2024-07-26 16:41:38.397734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.806 qpair failed and we were unable to recover it. 00:36:18.806 [2024-07-26 16:41:38.397914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.806 [2024-07-26 16:41:38.397948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.806 qpair failed and we were unable to recover it. 00:36:18.806 [2024-07-26 16:41:38.398110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.806 [2024-07-26 16:41:38.398149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.806 qpair failed and we were unable to recover it. 00:36:18.806 [2024-07-26 16:41:38.398329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.806 [2024-07-26 16:41:38.398364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.806 qpair failed and we were unable to recover it. 00:36:18.806 [2024-07-26 16:41:38.398536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.806 [2024-07-26 16:41:38.398571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.806 qpair failed and we were unable to recover it. 00:36:18.806 [2024-07-26 16:41:38.398745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.806 [2024-07-26 16:41:38.398780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.806 qpair failed and we were unable to recover it. 00:36:18.806 [2024-07-26 16:41:38.398925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.807 [2024-07-26 16:41:38.398959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.807 qpair failed and we were unable to recover it. 00:36:18.807 [2024-07-26 16:41:38.399122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.807 [2024-07-26 16:41:38.399157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.807 qpair failed and we were unable to recover it. 
00:36:18.807 [2024-07-26 16:41:38.399341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.807 [2024-07-26 16:41:38.399375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.807 qpair failed and we were unable to recover it. 00:36:18.807 [2024-07-26 16:41:38.399555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.807 [2024-07-26 16:41:38.399589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.807 qpair failed and we were unable to recover it. 00:36:18.807 [2024-07-26 16:41:38.399740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.807 [2024-07-26 16:41:38.399774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.807 qpair failed and we were unable to recover it. 00:36:18.807 [2024-07-26 16:41:38.399953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.807 [2024-07-26 16:41:38.399988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.807 qpair failed and we were unable to recover it. 00:36:18.807 [2024-07-26 16:41:38.400197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.807 [2024-07-26 16:41:38.400232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.807 qpair failed and we were unable to recover it. 00:36:18.807 [2024-07-26 16:41:38.400385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.807 [2024-07-26 16:41:38.400419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.807 qpair failed and we were unable to recover it. 00:36:18.807 [2024-07-26 16:41:38.400583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.807 [2024-07-26 16:41:38.400618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.807 qpair failed and we were unable to recover it. 00:36:18.807 [2024-07-26 16:41:38.400824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.807 [2024-07-26 16:41:38.400858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.807 qpair failed and we were unable to recover it. 00:36:18.807 [2024-07-26 16:41:38.401001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.807 [2024-07-26 16:41:38.401036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.807 qpair failed and we were unable to recover it. 00:36:18.807 [2024-07-26 16:41:38.401247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.807 [2024-07-26 16:41:38.401299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.807 qpair failed and we were unable to recover it. 
00:36:18.807 [2024-07-26 16:41:38.401507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.807 [2024-07-26 16:41:38.401545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.807 qpair failed and we were unable to recover it. 00:36:18.807 [2024-07-26 16:41:38.401699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.807 [2024-07-26 16:41:38.401735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.807 qpair failed and we were unable to recover it. 00:36:18.807 [2024-07-26 16:41:38.401913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.807 [2024-07-26 16:41:38.401949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.807 qpair failed and we were unable to recover it. 00:36:18.807 [2024-07-26 16:41:38.402127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.807 [2024-07-26 16:41:38.402163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.807 qpair failed and we were unable to recover it. 00:36:18.807 [2024-07-26 16:41:38.402349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.807 [2024-07-26 16:41:38.402387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.807 qpair failed and we were unable to recover it. 00:36:18.807 [2024-07-26 16:41:38.402564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.807 [2024-07-26 16:41:38.402600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.807 qpair failed and we were unable to recover it. 00:36:18.807 [2024-07-26 16:41:38.402777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.807 [2024-07-26 16:41:38.402813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.807 qpair failed and we were unable to recover it. 00:36:18.807 [2024-07-26 16:41:38.403001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.807 [2024-07-26 16:41:38.403036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.807 qpair failed and we were unable to recover it. 00:36:18.807 [2024-07-26 16:41:38.403259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.807 [2024-07-26 16:41:38.403295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.807 qpair failed and we were unable to recover it. 00:36:18.807 [2024-07-26 16:41:38.403448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.807 [2024-07-26 16:41:38.403483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.807 qpair failed and we were unable to recover it. 
00:36:18.807 [2024-07-26 16:41:38.403660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.807 [2024-07-26 16:41:38.403695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.807 qpair failed and we were unable to recover it. 00:36:18.807 [2024-07-26 16:41:38.403844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.807 [2024-07-26 16:41:38.403888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.807 qpair failed and we were unable to recover it. 00:36:18.807 [2024-07-26 16:41:38.404075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.807 [2024-07-26 16:41:38.404112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.807 qpair failed and we were unable to recover it. 00:36:18.807 [2024-07-26 16:41:38.404264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.807 [2024-07-26 16:41:38.404306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.807 qpair failed and we were unable to recover it. 00:36:18.807 [2024-07-26 16:41:38.404520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.807 [2024-07-26 16:41:38.404555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.807 qpair failed and we were unable to recover it. 00:36:18.807 [2024-07-26 16:41:38.404728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.807 [2024-07-26 16:41:38.404764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.807 qpair failed and we were unable to recover it. 00:36:18.807 [2024-07-26 16:41:38.404962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.807 [2024-07-26 16:41:38.404997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.807 qpair failed and we were unable to recover it. 00:36:18.807 [2024-07-26 16:41:38.405175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.807 [2024-07-26 16:41:38.405212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.807 qpair failed and we were unable to recover it. 00:36:18.807 [2024-07-26 16:41:38.405380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.807 [2024-07-26 16:41:38.405415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.807 qpair failed and we were unable to recover it. 00:36:18.807 [2024-07-26 16:41:38.405620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.807 [2024-07-26 16:41:38.405655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.807 qpair failed and we were unable to recover it. 
00:36:18.807 [2024-07-26 16:41:38.405816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.807 [2024-07-26 16:41:38.405851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.807 qpair failed and we were unable to recover it. 00:36:18.807 [2024-07-26 16:41:38.406018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.807 [2024-07-26 16:41:38.406053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.807 qpair failed and we were unable to recover it. 00:36:18.807 [2024-07-26 16:41:38.406245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.807 [2024-07-26 16:41:38.406280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.807 qpair failed and we were unable to recover it. 00:36:18.807 [2024-07-26 16:41:38.406432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.807 [2024-07-26 16:41:38.406466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.807 qpair failed and we were unable to recover it. 00:36:18.807 [2024-07-26 16:41:38.406671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.807 [2024-07-26 16:41:38.406710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.807 qpair failed and we were unable to recover it. 00:36:18.807 [2024-07-26 16:41:38.406880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.807 [2024-07-26 16:41:38.406915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.807 qpair failed and we were unable to recover it. 00:36:18.808 [2024-07-26 16:41:38.407086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.808 [2024-07-26 16:41:38.407121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.808 qpair failed and we were unable to recover it. 00:36:18.808 [2024-07-26 16:41:38.407288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.808 [2024-07-26 16:41:38.407322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.808 qpair failed and we were unable to recover it. 00:36:18.808 [2024-07-26 16:41:38.407524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.808 [2024-07-26 16:41:38.407558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.808 qpair failed and we were unable to recover it. 00:36:18.808 [2024-07-26 16:41:38.407714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.808 [2024-07-26 16:41:38.407749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.808 qpair failed and we were unable to recover it. 
00:36:18.808 [2024-07-26 16:41:38.407911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.808 [2024-07-26 16:41:38.407945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.808 qpair failed and we were unable to recover it. 00:36:18.808 [2024-07-26 16:41:38.408150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.808 [2024-07-26 16:41:38.408185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.808 qpair failed and we were unable to recover it. 00:36:18.808 [2024-07-26 16:41:38.408337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.808 [2024-07-26 16:41:38.408372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.808 qpair failed and we were unable to recover it. 00:36:18.808 [2024-07-26 16:41:38.408550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.808 [2024-07-26 16:41:38.408584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.808 qpair failed and we were unable to recover it. 00:36:18.808 [2024-07-26 16:41:38.408746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.808 [2024-07-26 16:41:38.408781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.808 qpair failed and we were unable to recover it. 00:36:18.808 [2024-07-26 16:41:38.408927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.808 [2024-07-26 16:41:38.408961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.808 qpair failed and we were unable to recover it. 00:36:18.808 [2024-07-26 16:41:38.409139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.808 [2024-07-26 16:41:38.409174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.808 qpair failed and we were unable to recover it. 00:36:18.808 [2024-07-26 16:41:38.409366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.808 [2024-07-26 16:41:38.409401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.808 qpair failed and we were unable to recover it. 00:36:18.808 [2024-07-26 16:41:38.409556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.808 [2024-07-26 16:41:38.409591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.808 qpair failed and we were unable to recover it. 00:36:18.808 [2024-07-26 16:41:38.409841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.808 [2024-07-26 16:41:38.409875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.808 qpair failed and we were unable to recover it. 
00:36:18.808 [2024-07-26 16:41:38.410052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.808 [2024-07-26 16:41:38.410092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.808 qpair failed and we were unable to recover it. 00:36:18.808 [2024-07-26 16:41:38.410282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.808 [2024-07-26 16:41:38.410333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.808 qpair failed and we were unable to recover it. 00:36:18.808 [2024-07-26 16:41:38.410516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.808 [2024-07-26 16:41:38.410554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.808 qpair failed and we were unable to recover it. 00:36:18.808 [2024-07-26 16:41:38.410732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.808 [2024-07-26 16:41:38.410767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.808 qpair failed and we were unable to recover it. 00:36:18.808 [2024-07-26 16:41:38.410988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.808 [2024-07-26 16:41:38.411024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.808 qpair failed and we were unable to recover it. 00:36:18.808 [2024-07-26 16:41:38.411206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.808 [2024-07-26 16:41:38.411243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.808 qpair failed and we were unable to recover it. 00:36:18.808 [2024-07-26 16:41:38.411429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.808 [2024-07-26 16:41:38.411465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.808 qpair failed and we were unable to recover it. 00:36:18.808 [2024-07-26 16:41:38.411687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.808 [2024-07-26 16:41:38.411722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.808 qpair failed and we were unable to recover it. 00:36:18.808 [2024-07-26 16:41:38.411881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.808 [2024-07-26 16:41:38.411925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.808 qpair failed and we were unable to recover it. 00:36:18.808 [2024-07-26 16:41:38.412113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.808 [2024-07-26 16:41:38.412148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.808 qpair failed and we were unable to recover it. 
00:36:18.808 [2024-07-26 16:41:38.412347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.808 [2024-07-26 16:41:38.412389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.808 qpair failed and we were unable to recover it. 00:36:18.808 [2024-07-26 16:41:38.412601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.808 [2024-07-26 16:41:38.412652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.808 qpair failed and we were unable to recover it. 00:36:18.808 [2024-07-26 16:41:38.412844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.808 [2024-07-26 16:41:38.412894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.808 qpair failed and we were unable to recover it. 00:36:18.808 [2024-07-26 16:41:38.413169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.808 [2024-07-26 16:41:38.413206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.808 qpair failed and we were unable to recover it. 00:36:18.808 [2024-07-26 16:41:38.413359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.808 [2024-07-26 16:41:38.413395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.808 qpair failed and we were unable to recover it. 00:36:18.808 [2024-07-26 16:41:38.413584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.808 [2024-07-26 16:41:38.413619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.808 qpair failed and we were unable to recover it. 00:36:18.808 [2024-07-26 16:41:38.413826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.808 [2024-07-26 16:41:38.413861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.808 qpair failed and we were unable to recover it. 00:36:18.808 [2024-07-26 16:41:38.414021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.809 [2024-07-26 16:41:38.414057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.809 qpair failed and we were unable to recover it. 00:36:18.809 [2024-07-26 16:41:38.414252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.809 [2024-07-26 16:41:38.414286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.809 qpair failed and we were unable to recover it. 00:36:18.809 [2024-07-26 16:41:38.414478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.809 [2024-07-26 16:41:38.414513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.809 qpair failed and we were unable to recover it. 
00:36:18.809 [2024-07-26 16:41:38.414666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.809 [2024-07-26 16:41:38.414700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.809 qpair failed and we were unable to recover it. 00:36:18.809 [2024-07-26 16:41:38.414898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.809 [2024-07-26 16:41:38.414934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.809 qpair failed and we were unable to recover it. 00:36:18.809 [2024-07-26 16:41:38.415108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.809 [2024-07-26 16:41:38.415143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.809 qpair failed and we were unable to recover it. 00:36:18.809 [2024-07-26 16:41:38.415325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.809 [2024-07-26 16:41:38.415360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.809 qpair failed and we were unable to recover it. 00:36:18.809 [2024-07-26 16:41:38.415578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.809 [2024-07-26 16:41:38.415618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.809 qpair failed and we were unable to recover it. 00:36:18.809 [2024-07-26 16:41:38.415782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.809 [2024-07-26 16:41:38.415838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.809 qpair failed and we were unable to recover it. 00:36:18.809 [2024-07-26 16:41:38.416019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.809 [2024-07-26 16:41:38.416054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.809 qpair failed and we were unable to recover it. 00:36:18.809 [2024-07-26 16:41:38.416223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.809 [2024-07-26 16:41:38.416267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.809 qpair failed and we were unable to recover it. 00:36:18.809 [2024-07-26 16:41:38.416454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.809 [2024-07-26 16:41:38.416498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.809 qpair failed and we were unable to recover it. 00:36:18.809 [2024-07-26 16:41:38.416680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.809 [2024-07-26 16:41:38.416716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.809 qpair failed and we were unable to recover it. 
00:36:18.809 [2024-07-26 16:41:38.416872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.809 [2024-07-26 16:41:38.416907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.809 qpair failed and we were unable to recover it. 00:36:18.809 [2024-07-26 16:41:38.417086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.809 [2024-07-26 16:41:38.417121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.809 qpair failed and we were unable to recover it. 00:36:18.809 [2024-07-26 16:41:38.417270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.809 [2024-07-26 16:41:38.417304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.809 qpair failed and we were unable to recover it. 00:36:18.809 [2024-07-26 16:41:38.417487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.809 [2024-07-26 16:41:38.417527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.809 qpair failed and we were unable to recover it. 00:36:18.809 [2024-07-26 16:41:38.417704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.809 [2024-07-26 16:41:38.417739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.809 qpair failed and we were unable to recover it. 00:36:18.809 [2024-07-26 16:41:38.417920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.809 [2024-07-26 16:41:38.417954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.809 qpair failed and we were unable to recover it. 00:36:18.809 [2024-07-26 16:41:38.418109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.809 [2024-07-26 16:41:38.418145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.809 qpair failed and we were unable to recover it. 00:36:18.809 [2024-07-26 16:41:38.418321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.809 [2024-07-26 16:41:38.418356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.809 qpair failed and we were unable to recover it. 00:36:18.809 [2024-07-26 16:41:38.418540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.809 [2024-07-26 16:41:38.418573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.809 qpair failed and we were unable to recover it. 00:36:18.809 [2024-07-26 16:41:38.418766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.809 [2024-07-26 16:41:38.418800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.809 qpair failed and we were unable to recover it. 
00:36:18.809 [2024-07-26 16:41:38.418962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.809 [2024-07-26 16:41:38.418996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.809 qpair failed and we were unable to recover it. 00:36:18.809 [2024-07-26 16:41:38.419173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.809 [2024-07-26 16:41:38.419207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.809 qpair failed and we were unable to recover it. 00:36:18.809 [2024-07-26 16:41:38.419383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.809 [2024-07-26 16:41:38.419416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.809 qpair failed and we were unable to recover it. 00:36:18.809 [2024-07-26 16:41:38.419567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.809 [2024-07-26 16:41:38.419601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.809 qpair failed and we were unable to recover it. 00:36:18.809 [2024-07-26 16:41:38.419777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.809 [2024-07-26 16:41:38.419811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.809 qpair failed and we were unable to recover it. 00:36:18.809 [2024-07-26 16:41:38.419995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.809 [2024-07-26 16:41:38.420030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.809 qpair failed and we were unable to recover it. 00:36:18.809 [2024-07-26 16:41:38.420247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.809 [2024-07-26 16:41:38.420280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.809 qpair failed and we were unable to recover it. 00:36:18.809 [2024-07-26 16:41:38.420427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.809 [2024-07-26 16:41:38.420461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.809 qpair failed and we were unable to recover it. 00:36:18.809 [2024-07-26 16:41:38.420603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.809 [2024-07-26 16:41:38.420637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.809 qpair failed and we were unable to recover it. 00:36:18.809 [2024-07-26 16:41:38.420855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.809 [2024-07-26 16:41:38.420890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.809 qpair failed and we were unable to recover it. 
00:36:18.809 [2024-07-26 16:41:38.421074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.809 [2024-07-26 16:41:38.421113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.809 qpair failed and we were unable to recover it. 00:36:18.809 [2024-07-26 16:41:38.421267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.809 [2024-07-26 16:41:38.421301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.809 qpair failed and we were unable to recover it. 00:36:18.809 [2024-07-26 16:41:38.421453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.809 [2024-07-26 16:41:38.421487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.809 qpair failed and we were unable to recover it. 00:36:18.809 [2024-07-26 16:41:38.421630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.809 [2024-07-26 16:41:38.421664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.809 qpair failed and we were unable to recover it. 00:36:18.809 [2024-07-26 16:41:38.421844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.809 [2024-07-26 16:41:38.421878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.809 qpair failed and we were unable to recover it. 00:36:18.810 [2024-07-26 16:41:38.422033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.810 [2024-07-26 16:41:38.422076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.810 qpair failed and we were unable to recover it. 00:36:18.810 [2024-07-26 16:41:38.422257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.810 [2024-07-26 16:41:38.422291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.810 qpair failed and we were unable to recover it. 00:36:18.810 [2024-07-26 16:41:38.422472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.810 [2024-07-26 16:41:38.422507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.810 qpair failed and we were unable to recover it. 00:36:18.810 [2024-07-26 16:41:38.422680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.810 [2024-07-26 16:41:38.422714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.810 qpair failed and we were unable to recover it. 00:36:18.810 [2024-07-26 16:41:38.422876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.810 [2024-07-26 16:41:38.422911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.810 qpair failed and we were unable to recover it. 
00:36:18.810 [2024-07-26 16:41:38.423095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.810 [2024-07-26 16:41:38.423130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.810 qpair failed and we were unable to recover it. 00:36:18.810 [2024-07-26 16:41:38.423274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.810 [2024-07-26 16:41:38.423308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.810 qpair failed and we were unable to recover it. 00:36:18.810 [2024-07-26 16:41:38.423451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.810 [2024-07-26 16:41:38.423485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.810 qpair failed and we were unable to recover it. 00:36:18.810 [2024-07-26 16:41:38.423640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.810 [2024-07-26 16:41:38.423674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.810 qpair failed and we were unable to recover it. 00:36:18.810 [2024-07-26 16:41:38.423830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.810 [2024-07-26 16:41:38.423868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.810 qpair failed and we were unable to recover it. 00:36:18.810 [2024-07-26 16:41:38.424073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.810 [2024-07-26 16:41:38.424107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.810 qpair failed and we were unable to recover it. 00:36:18.810 [2024-07-26 16:41:38.424263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.810 [2024-07-26 16:41:38.424296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.810 qpair failed and we were unable to recover it. 00:36:18.810 [2024-07-26 16:41:38.424469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.810 [2024-07-26 16:41:38.424503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.810 qpair failed and we were unable to recover it. 00:36:18.810 [2024-07-26 16:41:38.424652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.810 [2024-07-26 16:41:38.424686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.810 qpair failed and we were unable to recover it. 00:36:18.810 [2024-07-26 16:41:38.424897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.810 [2024-07-26 16:41:38.424932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.810 qpair failed and we were unable to recover it. 
00:36:18.810 [2024-07-26 16:41:38.425089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.810 [2024-07-26 16:41:38.425134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.810 qpair failed and we were unable to recover it. 00:36:18.810 [2024-07-26 16:41:38.425309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.810 [2024-07-26 16:41:38.425343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.810 qpair failed and we were unable to recover it. 00:36:18.810 [2024-07-26 16:41:38.425543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.810 [2024-07-26 16:41:38.425576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.810 qpair failed and we were unable to recover it. 00:36:18.810 [2024-07-26 16:41:38.425750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.810 [2024-07-26 16:41:38.425783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.810 qpair failed and we were unable to recover it. 00:36:18.810 [2024-07-26 16:41:38.425958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.810 [2024-07-26 16:41:38.425991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.810 qpair failed and we were unable to recover it. 00:36:18.810 [2024-07-26 16:41:38.426161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.810 [2024-07-26 16:41:38.426195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.810 qpair failed and we were unable to recover it. 00:36:18.810 [2024-07-26 16:41:38.426365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.810 [2024-07-26 16:41:38.426399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.810 qpair failed and we were unable to recover it. 00:36:18.810 [2024-07-26 16:41:38.426574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.810 [2024-07-26 16:41:38.426607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.810 qpair failed and we were unable to recover it. 00:36:18.810 [2024-07-26 16:41:38.426775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.810 [2024-07-26 16:41:38.426809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.810 qpair failed and we were unable to recover it. 00:36:18.810 [2024-07-26 16:41:38.426984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.810 [2024-07-26 16:41:38.427019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.810 qpair failed and we were unable to recover it. 
00:36:18.810 [2024-07-26 16:41:38.427189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.810 [2024-07-26 16:41:38.427223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.810 qpair failed and we were unable to recover it. 00:36:18.810 [2024-07-26 16:41:38.427377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.810 [2024-07-26 16:41:38.427412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.810 qpair failed and we were unable to recover it. 00:36:18.810 [2024-07-26 16:41:38.427604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.810 [2024-07-26 16:41:38.427638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.810 qpair failed and we were unable to recover it. 00:36:18.810 [2024-07-26 16:41:38.427793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.810 [2024-07-26 16:41:38.427827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.810 qpair failed and we were unable to recover it. 00:36:18.810 [2024-07-26 16:41:38.428033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.810 [2024-07-26 16:41:38.428076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.810 qpair failed and we were unable to recover it. 00:36:18.810 [2024-07-26 16:41:38.428232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.810 [2024-07-26 16:41:38.428267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.810 qpair failed and we were unable to recover it. 00:36:18.810 [2024-07-26 16:41:38.428417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.810 [2024-07-26 16:41:38.428450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.810 qpair failed and we were unable to recover it. 00:36:18.810 [2024-07-26 16:41:38.428600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.810 [2024-07-26 16:41:38.428634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.810 qpair failed and we were unable to recover it. 00:36:18.810 [2024-07-26 16:41:38.428818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.810 [2024-07-26 16:41:38.428853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.810 qpair failed and we were unable to recover it. 00:36:18.810 [2024-07-26 16:41:38.429013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.810 [2024-07-26 16:41:38.429066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.810 qpair failed and we were unable to recover it. 
00:36:18.810 [2024-07-26 16:41:38.429246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.810 [2024-07-26 16:41:38.429280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.810 qpair failed and we were unable to recover it. 00:36:18.810 [2024-07-26 16:41:38.429476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.810 [2024-07-26 16:41:38.429525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.810 qpair failed and we were unable to recover it. 00:36:18.810 [2024-07-26 16:41:38.429682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.811 [2024-07-26 16:41:38.429719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.811 qpair failed and we were unable to recover it. 00:36:18.811 [2024-07-26 16:41:38.429868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.811 [2024-07-26 16:41:38.429903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.811 qpair failed and we were unable to recover it. 00:36:18.811 [2024-07-26 16:41:38.430067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.811 [2024-07-26 16:41:38.430116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.811 qpair failed and we were unable to recover it. 00:36:18.811 [2024-07-26 16:41:38.430324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.811 [2024-07-26 16:41:38.430359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.811 qpair failed and we were unable to recover it. 00:36:18.811 [2024-07-26 16:41:38.430543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.811 [2024-07-26 16:41:38.430578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.811 qpair failed and we were unable to recover it. 00:36:18.811 [2024-07-26 16:41:38.430725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.811 [2024-07-26 16:41:38.430759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.811 qpair failed and we were unable to recover it. 00:36:18.811 [2024-07-26 16:41:38.430905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.811 [2024-07-26 16:41:38.430939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.811 qpair failed and we were unable to recover it. 00:36:18.811 [2024-07-26 16:41:38.431112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.811 [2024-07-26 16:41:38.431147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.811 qpair failed and we were unable to recover it. 
00:36:18.811 [2024-07-26 16:41:38.431322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.811 [2024-07-26 16:41:38.431357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.811 qpair failed and we were unable to recover it. 00:36:18.811 [2024-07-26 16:41:38.431531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.811 [2024-07-26 16:41:38.431571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.811 qpair failed and we were unable to recover it. 00:36:18.811 [2024-07-26 16:41:38.431718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.811 [2024-07-26 16:41:38.431752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.811 qpair failed and we were unable to recover it. 00:36:18.811 [2024-07-26 16:41:38.431890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.811 [2024-07-26 16:41:38.431923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.811 qpair failed and we were unable to recover it. 00:36:18.811 [2024-07-26 16:41:38.432080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.811 [2024-07-26 16:41:38.432121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.811 qpair failed and we were unable to recover it. 00:36:18.811 [2024-07-26 16:41:38.432297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.811 [2024-07-26 16:41:38.432330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.811 qpair failed and we were unable to recover it. 00:36:18.811 [2024-07-26 16:41:38.432496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.811 [2024-07-26 16:41:38.432530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.811 qpair failed and we were unable to recover it. 00:36:18.811 [2024-07-26 16:41:38.432700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.811 [2024-07-26 16:41:38.432734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.811 qpair failed and we were unable to recover it. 00:36:18.811 [2024-07-26 16:41:38.432909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.811 [2024-07-26 16:41:38.432943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.811 qpair failed and we were unable to recover it. 00:36:18.811 [2024-07-26 16:41:38.433136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.811 [2024-07-26 16:41:38.433171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.811 qpair failed and we were unable to recover it. 
00:36:18.811 [2024-07-26 16:41:38.433332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.811 [2024-07-26 16:41:38.433365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.811 qpair failed and we were unable to recover it. 00:36:18.811 [2024-07-26 16:41:38.433564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.811 [2024-07-26 16:41:38.433599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.811 qpair failed and we were unable to recover it. 00:36:18.811 [2024-07-26 16:41:38.433774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.811 [2024-07-26 16:41:38.433809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.811 qpair failed and we were unable to recover it. 00:36:18.811 [2024-07-26 16:41:38.433967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.811 [2024-07-26 16:41:38.434017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.811 qpair failed and we were unable to recover it. 00:36:18.811 [2024-07-26 16:41:38.434205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.811 [2024-07-26 16:41:38.434242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.811 qpair failed and we were unable to recover it. 00:36:18.811 [2024-07-26 16:41:38.434399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.811 [2024-07-26 16:41:38.434433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.811 qpair failed and we were unable to recover it. 00:36:18.811 [2024-07-26 16:41:38.434586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.811 [2024-07-26 16:41:38.434619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.811 qpair failed and we were unable to recover it. 00:36:18.811 [2024-07-26 16:41:38.434820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.811 [2024-07-26 16:41:38.434868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.811 qpair failed and we were unable to recover it. 00:36:18.811 [2024-07-26 16:41:38.435018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.811 [2024-07-26 16:41:38.435052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.811 qpair failed and we were unable to recover it. 00:36:18.811 [2024-07-26 16:41:38.435218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.811 [2024-07-26 16:41:38.435252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.811 qpair failed and we were unable to recover it. 
00:36:18.811 [2024-07-26 16:41:38.435425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.811 [2024-07-26 16:41:38.435458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.811 qpair failed and we were unable to recover it. 00:36:18.811 [2024-07-26 16:41:38.435614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.811 [2024-07-26 16:41:38.435648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.811 qpair failed and we were unable to recover it. 00:36:18.811 [2024-07-26 16:41:38.435794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.811 [2024-07-26 16:41:38.435828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.811 qpair failed and we were unable to recover it. 00:36:18.811 [2024-07-26 16:41:38.435967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.811 [2024-07-26 16:41:38.436000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.811 qpair failed and we were unable to recover it. 00:36:18.811 [2024-07-26 16:41:38.436169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.811 [2024-07-26 16:41:38.436203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.811 qpair failed and we were unable to recover it. 00:36:18.811 [2024-07-26 16:41:38.436348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.811 [2024-07-26 16:41:38.436382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.811 qpair failed and we were unable to recover it. 00:36:18.811 [2024-07-26 16:41:38.436568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.811 [2024-07-26 16:41:38.436602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.811 qpair failed and we were unable to recover it. 00:36:18.811 [2024-07-26 16:41:38.436754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.811 [2024-07-26 16:41:38.436788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.811 qpair failed and we were unable to recover it. 00:36:18.811 [2024-07-26 16:41:38.436967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.811 [2024-07-26 16:41:38.437003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.811 qpair failed and we were unable to recover it. 00:36:18.811 [2024-07-26 16:41:38.437197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.811 [2024-07-26 16:41:38.437232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.812 qpair failed and we were unable to recover it. 
00:36:18.812 [2024-07-26 16:41:38.437373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.812 [2024-07-26 16:41:38.437408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.812 qpair failed and we were unable to recover it. 00:36:18.812 [2024-07-26 16:41:38.437585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.812 [2024-07-26 16:41:38.437620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.812 qpair failed and we were unable to recover it. 00:36:18.812 [2024-07-26 16:41:38.437796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.812 [2024-07-26 16:41:38.437830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.812 qpair failed and we were unable to recover it. 00:36:18.812 [2024-07-26 16:41:38.437998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.812 [2024-07-26 16:41:38.438031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.812 qpair failed and we were unable to recover it. 00:36:18.812 [2024-07-26 16:41:38.438219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.812 [2024-07-26 16:41:38.438252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.812 qpair failed and we were unable to recover it. 00:36:18.812 [2024-07-26 16:41:38.438397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.812 [2024-07-26 16:41:38.438431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.812 qpair failed and we were unable to recover it. 00:36:18.812 [2024-07-26 16:41:38.438585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.812 [2024-07-26 16:41:38.438618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.812 qpair failed and we were unable to recover it. 00:36:18.812 [2024-07-26 16:41:38.438791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.812 [2024-07-26 16:41:38.438823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.812 qpair failed and we were unable to recover it. 00:36:18.812 [2024-07-26 16:41:38.439020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.812 [2024-07-26 16:41:38.439054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.812 qpair failed and we were unable to recover it. 00:36:18.812 [2024-07-26 16:41:38.439246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.812 [2024-07-26 16:41:38.439280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.812 qpair failed and we were unable to recover it. 
00:36:18.812 [2024-07-26 16:41:38.439427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.812 [2024-07-26 16:41:38.439460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.812 qpair failed and we were unable to recover it. 00:36:18.812 [2024-07-26 16:41:38.439603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.812 [2024-07-26 16:41:38.439637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.812 qpair failed and we were unable to recover it. 00:36:18.812 [2024-07-26 16:41:38.439781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.812 [2024-07-26 16:41:38.439815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.812 qpair failed and we were unable to recover it. 00:36:18.812 [2024-07-26 16:41:38.439988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.812 [2024-07-26 16:41:38.440020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.812 qpair failed and we were unable to recover it. 00:36:18.812 [2024-07-26 16:41:38.440194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.812 [2024-07-26 16:41:38.440236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.812 qpair failed and we were unable to recover it. 00:36:18.812 [2024-07-26 16:41:38.440396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.812 [2024-07-26 16:41:38.440429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.812 qpair failed and we were unable to recover it. 00:36:18.812 [2024-07-26 16:41:38.440607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.812 [2024-07-26 16:41:38.440640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.812 qpair failed and we were unable to recover it. 00:36:18.812 [2024-07-26 16:41:38.440815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.812 [2024-07-26 16:41:38.440848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.812 qpair failed and we were unable to recover it. 00:36:18.812 [2024-07-26 16:41:38.440997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.812 [2024-07-26 16:41:38.441031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.812 qpair failed and we were unable to recover it. 00:36:18.812 [2024-07-26 16:41:38.441202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.812 [2024-07-26 16:41:38.441251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.812 qpair failed and we were unable to recover it. 
00:36:18.812 [2024-07-26 16:41:38.441422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.812 [2024-07-26 16:41:38.441459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.812 qpair failed and we were unable to recover it. 00:36:18.812 [2024-07-26 16:41:38.441646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.812 [2024-07-26 16:41:38.441692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.812 qpair failed and we were unable to recover it. 00:36:18.812 [2024-07-26 16:41:38.441861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.812 [2024-07-26 16:41:38.441896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.812 qpair failed and we were unable to recover it. 00:36:18.812 [2024-07-26 16:41:38.442079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.812 [2024-07-26 16:41:38.442114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.812 qpair failed and we were unable to recover it. 00:36:18.812 [2024-07-26 16:41:38.442295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.812 [2024-07-26 16:41:38.442329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.812 qpair failed and we were unable to recover it. 00:36:18.812 [2024-07-26 16:41:38.442539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.812 [2024-07-26 16:41:38.442574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.812 qpair failed and we were unable to recover it. 00:36:18.812 [2024-07-26 16:41:38.442717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.812 [2024-07-26 16:41:38.442751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.812 qpair failed and we were unable to recover it. 00:36:18.812 [2024-07-26 16:41:38.442933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.812 [2024-07-26 16:41:38.442968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.812 qpair failed and we were unable to recover it. 00:36:18.812 [2024-07-26 16:41:38.443149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.812 [2024-07-26 16:41:38.443185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.812 qpair failed and we were unable to recover it. 00:36:18.812 [2024-07-26 16:41:38.443343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.812 [2024-07-26 16:41:38.443385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.812 qpair failed and we were unable to recover it. 
00:36:18.812 [2024-07-26 16:41:38.443569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.812 [2024-07-26 16:41:38.443602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.812 qpair failed and we were unable to recover it. 00:36:18.812 [2024-07-26 16:41:38.443780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.812 [2024-07-26 16:41:38.443813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.812 qpair failed and we were unable to recover it. 00:36:18.812 [2024-07-26 16:41:38.443960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.812 [2024-07-26 16:41:38.443994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.812 qpair failed and we were unable to recover it. 00:36:18.812 [2024-07-26 16:41:38.444161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.812 [2024-07-26 16:41:38.444195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.812 qpair failed and we were unable to recover it. 00:36:18.812 [2024-07-26 16:41:38.444395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.812 [2024-07-26 16:41:38.444445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.812 qpair failed and we were unable to recover it. 00:36:18.812 [2024-07-26 16:41:38.444620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.812 [2024-07-26 16:41:38.444657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.812 qpair failed and we were unable to recover it. 00:36:18.812 [2024-07-26 16:41:38.444843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.812 [2024-07-26 16:41:38.444888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.812 qpair failed and we were unable to recover it. 00:36:18.813 [2024-07-26 16:41:38.445037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.813 [2024-07-26 16:41:38.445078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.813 qpair failed and we were unable to recover it. 00:36:18.813 [2024-07-26 16:41:38.445232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.813 [2024-07-26 16:41:38.445265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.813 qpair failed and we were unable to recover it. 00:36:18.813 [2024-07-26 16:41:38.445419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.813 [2024-07-26 16:41:38.445453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.813 qpair failed and we were unable to recover it. 
00:36:18.813 [2024-07-26 16:41:38.445624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.813 [2024-07-26 16:41:38.445658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.813 qpair failed and we were unable to recover it. 00:36:18.813 [2024-07-26 16:41:38.445837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.813 [2024-07-26 16:41:38.445874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.813 qpair failed and we were unable to recover it. 00:36:18.813 [2024-07-26 16:41:38.446029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.813 [2024-07-26 16:41:38.446073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.813 qpair failed and we were unable to recover it. 00:36:18.813 [2024-07-26 16:41:38.446218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.813 [2024-07-26 16:41:38.446252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.813 qpair failed and we were unable to recover it. 00:36:18.813 [2024-07-26 16:41:38.446442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.813 [2024-07-26 16:41:38.446476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.813 qpair failed and we were unable to recover it. 00:36:18.813 [2024-07-26 16:41:38.446642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.813 [2024-07-26 16:41:38.446676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.813 qpair failed and we were unable to recover it. 00:36:18.813 [2024-07-26 16:41:38.446825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.813 [2024-07-26 16:41:38.446860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.813 qpair failed and we were unable to recover it. 00:36:18.813 [2024-07-26 16:41:38.447071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.813 [2024-07-26 16:41:38.447106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.813 qpair failed and we were unable to recover it. 00:36:18.813 [2024-07-26 16:41:38.447294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.813 [2024-07-26 16:41:38.447329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.813 qpair failed and we were unable to recover it. 00:36:18.813 [2024-07-26 16:41:38.447503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.813 [2024-07-26 16:41:38.447538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.813 qpair failed and we were unable to recover it. 
00:36:18.813 [2024-07-26 16:41:38.447691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.813 [2024-07-26 16:41:38.447726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.813 qpair failed and we were unable to recover it. 00:36:18.813 [2024-07-26 16:41:38.447880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.813 [2024-07-26 16:41:38.447916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.813 qpair failed and we were unable to recover it. 00:36:18.813 [2024-07-26 16:41:38.448134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.813 [2024-07-26 16:41:38.448171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.813 qpair failed and we were unable to recover it. 00:36:18.813 A controller has encountered a failure and is being reset. 00:36:18.813 [2024-07-26 16:41:38.448361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.813 [2024-07-26 16:41:38.448410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.813 qpair failed and we were unable to recover it. 00:36:18.813 [2024-07-26 16:41:38.448579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.813 [2024-07-26 16:41:38.448620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.813 qpair failed and we were unable to recover it. 00:36:18.813 [2024-07-26 16:41:38.448775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.813 [2024-07-26 16:41:38.448816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.813 qpair failed and we were unable to recover it. 00:36:18.813 [2024-07-26 16:41:38.449021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.813 [2024-07-26 16:41:38.449055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.813 qpair failed and we were unable to recover it. 00:36:18.813 [2024-07-26 16:41:38.449260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.813 [2024-07-26 16:41:38.449296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.813 qpair failed and we were unable to recover it. 00:36:18.813 [2024-07-26 16:41:38.449462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.813 [2024-07-26 16:41:38.449510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.813 qpair failed and we were unable to recover it. 00:36:18.813 [2024-07-26 16:41:38.449696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.813 [2024-07-26 16:41:38.449732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.813 qpair failed and we were unable to recover it. 
00:36:18.813 [2024-07-26 16:41:38.449913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.813 [2024-07-26 16:41:38.449947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.813 qpair failed and we were unable to recover it. 00:36:18.813 [2024-07-26 16:41:38.450131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.813 [2024-07-26 16:41:38.450167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.813 qpair failed and we were unable to recover it. 00:36:18.813 [2024-07-26 16:41:38.450342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.813 [2024-07-26 16:41:38.450376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.813 qpair failed and we were unable to recover it. 00:36:18.813 [2024-07-26 16:41:38.450523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.813 [2024-07-26 16:41:38.450557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.813 qpair failed and we were unable to recover it. 00:36:18.813 [2024-07-26 16:41:38.450737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.813 [2024-07-26 16:41:38.450771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.813 qpair failed and we were unable to recover it. 00:36:18.813 [2024-07-26 16:41:38.450954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.813 [2024-07-26 16:41:38.450989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.813 qpair failed and we were unable to recover it. 00:36:18.813 [2024-07-26 16:41:38.451154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.813 [2024-07-26 16:41:38.451191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.813 qpair failed and we were unable to recover it. 00:36:18.813 [2024-07-26 16:41:38.451346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.813 [2024-07-26 16:41:38.451379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.813 qpair failed and we were unable to recover it. 00:36:18.813 [2024-07-26 16:41:38.451537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.813 [2024-07-26 16:41:38.451571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.813 qpair failed and we were unable to recover it. 00:36:18.814 [2024-07-26 16:41:38.451748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.814 [2024-07-26 16:41:38.451781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.814 qpair failed and we were unable to recover it. 
00:36:18.814 [2024-07-26 16:41:38.451940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.814 [2024-07-26 16:41:38.451973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.814 qpair failed and we were unable to recover it. 00:36:18.814 [2024-07-26 16:41:38.452135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.814 [2024-07-26 16:41:38.452168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.814 qpair failed and we were unable to recover it. 00:36:18.814 [2024-07-26 16:41:38.452331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.814 [2024-07-26 16:41:38.452380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.814 qpair failed and we were unable to recover it. 00:36:18.814 [2024-07-26 16:41:38.452566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.814 [2024-07-26 16:41:38.452602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.814 qpair failed and we were unable to recover it. 00:36:18.814 [2024-07-26 16:41:38.452782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.814 [2024-07-26 16:41:38.452817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.814 qpair failed and we were unable to recover it. 00:36:18.814 [2024-07-26 16:41:38.452994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.814 [2024-07-26 16:41:38.453030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.814 qpair failed and we were unable to recover it. 00:36:18.814 [2024-07-26 16:41:38.453187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.814 [2024-07-26 16:41:38.453221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.814 qpair failed and we were unable to recover it. 00:36:18.814 [2024-07-26 16:41:38.453395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.814 [2024-07-26 16:41:38.453428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.814 qpair failed and we were unable to recover it. 00:36:18.814 [2024-07-26 16:41:38.453605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.814 [2024-07-26 16:41:38.453638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.814 qpair failed and we were unable to recover it. 00:36:18.814 [2024-07-26 16:41:38.453817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.814 [2024-07-26 16:41:38.453850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:18.814 qpair failed and we were unable to recover it. 
00:36:18.814 [2024-07-26 16:41:38.454030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.814 [2024-07-26 16:41:38.454072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.814 qpair failed and we were unable to recover it. 00:36:18.814 [2024-07-26 16:41:38.454255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.814 [2024-07-26 16:41:38.454289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.814 qpair failed and we were unable to recover it. 00:36:18.814 [2024-07-26 16:41:38.454483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.814 [2024-07-26 16:41:38.454531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.814 qpair failed and we were unable to recover it. 00:36:18.814 [2024-07-26 16:41:38.454721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.814 [2024-07-26 16:41:38.454756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.814 qpair failed and we were unable to recover it. 00:36:18.814 [2024-07-26 16:41:38.454915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.814 [2024-07-26 16:41:38.454951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.814 qpair failed and we were unable to recover it. 00:36:18.814 [2024-07-26 16:41:38.455136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.814 [2024-07-26 16:41:38.455171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.814 qpair failed and we were unable to recover it. 00:36:18.814 [2024-07-26 16:41:38.455347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.814 [2024-07-26 16:41:38.455381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.814 qpair failed and we were unable to recover it. 00:36:18.814 [2024-07-26 16:41:38.455539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.814 [2024-07-26 16:41:38.455573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.814 qpair failed and we were unable to recover it. 00:36:18.814 [2024-07-26 16:41:38.455717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.814 [2024-07-26 16:41:38.455751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.814 qpair failed and we were unable to recover it. 00:36:18.814 [2024-07-26 16:41:38.455931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.814 [2024-07-26 16:41:38.455964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.814 qpair failed and we were unable to recover it. 
00:36:18.814 [2024-07-26 16:41:38.456112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.814 [2024-07-26 16:41:38.456147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.814 qpair failed and we were unable to recover it. 00:36:18.814 [2024-07-26 16:41:38.456327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.814 [2024-07-26 16:41:38.456361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.814 qpair failed and we were unable to recover it. 00:36:18.814 [2024-07-26 16:41:38.456534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.814 [2024-07-26 16:41:38.456568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.814 qpair failed and we were unable to recover it. 00:36:18.814 [2024-07-26 16:41:38.456746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.814 [2024-07-26 16:41:38.456780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.814 qpair failed and we were unable to recover it. 00:36:18.814 [2024-07-26 16:41:38.456971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.814 [2024-07-26 16:41:38.457025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.814 qpair failed and we were unable to recover it. 00:36:18.814 [2024-07-26 16:41:38.457262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.814 [2024-07-26 16:41:38.457300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.814 qpair failed and we were unable to recover it. 00:36:18.814 [2024-07-26 16:41:38.457449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.814 [2024-07-26 16:41:38.457484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.814 qpair failed and we were unable to recover it. 00:36:18.814 [2024-07-26 16:41:38.457639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.814 [2024-07-26 16:41:38.457673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.814 qpair failed and we were unable to recover it. 00:36:18.814 [2024-07-26 16:41:38.457843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.814 [2024-07-26 16:41:38.457877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.814 qpair failed and we were unable to recover it. 00:36:18.814 [2024-07-26 16:41:38.458033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.814 [2024-07-26 16:41:38.458076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.814 qpair failed and we were unable to recover it. 
00:36:18.814 [2024-07-26 16:41:38.458255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.814 [2024-07-26 16:41:38.458289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.814 qpair failed and we were unable to recover it. 00:36:18.814 [2024-07-26 16:41:38.458461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.814 [2024-07-26 16:41:38.458496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.814 qpair failed and we were unable to recover it. 00:36:18.814 [2024-07-26 16:41:38.458676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.814 [2024-07-26 16:41:38.458712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.814 qpair failed and we were unable to recover it. 00:36:18.814 [2024-07-26 16:41:38.458876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.814 [2024-07-26 16:41:38.458910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.814 qpair failed and we were unable to recover it. 00:36:18.814 [2024-07-26 16:41:38.459101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.814 [2024-07-26 16:41:38.459136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.814 qpair failed and we were unable to recover it. 00:36:18.814 [2024-07-26 16:41:38.459326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.814 [2024-07-26 16:41:38.459359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.814 qpair failed and we were unable to recover it. 00:36:18.814 [2024-07-26 16:41:38.459507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.815 [2024-07-26 16:41:38.459541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.815 qpair failed and we were unable to recover it. 00:36:18.815 [2024-07-26 16:41:38.459688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.815 [2024-07-26 16:41:38.459722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.815 qpair failed and we were unable to recover it. 00:36:18.815 [2024-07-26 16:41:38.459913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.815 [2024-07-26 16:41:38.459946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.815 qpair failed and we were unable to recover it. 00:36:18.815 [2024-07-26 16:41:38.460096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.815 [2024-07-26 16:41:38.460130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.815 qpair failed and we were unable to recover it. 
00:36:18.815 [2024-07-26 16:41:38.460297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.815 [2024-07-26 16:41:38.460333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.815 qpair failed and we were unable to recover it. 00:36:18.815 [2024-07-26 16:41:38.460523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.815 [2024-07-26 16:41:38.460557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.815 qpair failed and we were unable to recover it. 00:36:18.815 [2024-07-26 16:41:38.460738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.815 [2024-07-26 16:41:38.460772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.815 qpair failed and we were unable to recover it. 00:36:18.815 [2024-07-26 16:41:38.460920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.815 [2024-07-26 16:41:38.460954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.815 qpair failed and we were unable to recover it. 00:36:18.815 [2024-07-26 16:41:38.461129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.815 [2024-07-26 16:41:38.461164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.815 qpair failed and we were unable to recover it. 00:36:18.815 [2024-07-26 16:41:38.461316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.815 [2024-07-26 16:41:38.461350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.815 qpair failed and we were unable to recover it. 00:36:18.815 [2024-07-26 16:41:38.461526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.815 [2024-07-26 16:41:38.461560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.815 qpair failed and we were unable to recover it. 00:36:18.815 [2024-07-26 16:41:38.461706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.815 [2024-07-26 16:41:38.461740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:18.815 qpair failed and we were unable to recover it. 00:36:18.815 [2024-07-26 16:41:38.461919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.815 [2024-07-26 16:41:38.461954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.815 qpair failed and we were unable to recover it. 00:36:18.815 [2024-07-26 16:41:38.462131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.815 [2024-07-26 16:41:38.462166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.815 qpair failed and we were unable to recover it. 
00:36:18.815 [2024-07-26 16:41:38.462348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.815 [2024-07-26 16:41:38.462382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:18.815 qpair failed and we were unable to recover it. 00:36:18.815 [2024-07-26 16:41:38.462630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.815 [2024-07-26 16:41:38.462673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:36:18.815 [2024-07-26 16:41:38.462701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:36:18.815 [2024-07-26 16:41:38.462744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:36:18.815 [2024-07-26 16:41:38.462775] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:18.815 [2024-07-26 16:41:38.462801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:18.815 [2024-07-26 16:41:38.462828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:18.815 Unable to reset the controller. 00:36:19.380 16:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:19.380 16:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:36:19.380 16:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:19.380 16:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:19.380 16:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:19.380 16:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:19.380 16:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:19.380 16:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:19.380 16:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:19.380 Malloc0 00:36:19.380 16:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:19.380 16:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:19.380 16:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:19.380 16:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:19.380 [2024-07-26 16:41:38.980827] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:19.380 
16:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:19.380 16:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:19.380 16:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:19.380 16:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:19.380 16:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:19.380 16:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:19.380 16:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:19.380 16:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:19.380 16:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:19.380 16:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:19.380 16:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:19.380 16:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:19.380 [2024-07-26 16:41:39.010288] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:19.380 16:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:19.380 16:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:19.380 16:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:19.380 16:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:19.380 16:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:19.380 16:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 822856 00:36:19.947 Controller properly reset. 
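The repeated connect() failures above (errno = 111, i.e. ECONNREFUSED) occur while the disconnect test has no listener up on 10.0.0.2:4420; the rpc_cmd trace that follows rebuilds the target, after which the controller reset succeeds ("Controller properly reset."). Below is a minimal sketch of the same bring-up issued directly with scripts/rpc.py against a running nvmf_tgt — the rpc.py path and the default RPC socket are assumptions, while the RPC names and flags are copied from the traced commands.

  # Hedged sketch: replay of the traced target setup via rpc.py (assumed SPDK checkout layout,
  # default RPC socket /var/tmp/spdk.sock). Flags are exactly as in the rpc_cmd trace above.
  RPC=./scripts/rpc.py
  $RPC bdev_malloc_create 64 512 -b Malloc0                                   # 64 MB malloc bdev, 512-byte blocks
  $RPC nvmf_create_transport -t tcp -o                                        # TCP transport, flags as traced
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # allow any host, set serial
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0               # attach the namespace
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420       # discovery listener on the same port

Once the listeners are back, the host-side reconnect attempts stop failing with ECONNREFUSED, which is what allows the controller reset shown above to complete.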
00:36:25.208 Initializing NVMe Controllers 00:36:25.208 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:25.208 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:25.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:36:25.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:36:25.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:36:25.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:36:25.208 Initialization complete. Launching workers. 00:36:25.208 Starting thread on core 1 00:36:25.208 Starting thread on core 2 00:36:25.208 Starting thread on core 3 00:36:25.208 Starting thread on core 0 00:36:25.208 16:41:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:36:25.208 00:36:25.208 real 0m11.699s 00:36:25.208 user 0m35.181s 00:36:25.208 sys 0m7.569s 00:36:25.208 16:41:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:25.208 16:41:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:25.208 ************************************ 00:36:25.208 END TEST nvmf_target_disconnect_tc2 00:36:25.208 ************************************ 00:36:25.208 16:41:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:36:25.208 16:41:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:36:25.208 16:41:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:36:25.208 16:41:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:25.208 16:41:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:36:25.208 16:41:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:25.208 16:41:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:36:25.208 16:41:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:25.208 16:41:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:25.208 rmmod nvme_tcp 00:36:25.208 rmmod nvme_fabrics 00:36:25.208 rmmod nvme_keyring 00:36:25.208 16:41:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:25.208 16:41:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:36:25.208 16:41:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:36:25.208 16:41:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 823378 ']' 00:36:25.208 16:41:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 823378 00:36:25.208 16:41:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 823378 ']' 00:36:25.208 16:41:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 823378 00:36:25.208 16:41:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:36:25.208 16:41:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = 
Linux ']' 00:36:25.208 16:41:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 823378 00:36:25.208 16:41:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:36:25.208 16:41:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:36:25.208 16:41:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 823378' 00:36:25.208 killing process with pid 823378 00:36:25.208 16:41:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 823378 00:36:25.208 16:41:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 823378 00:36:26.142 16:41:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:26.142 16:41:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:26.142 16:41:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:26.142 16:41:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:26.142 16:41:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:26.142 16:41:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:26.142 16:41:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:26.142 16:41:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:28.671 16:41:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:28.671 00:36:28.671 real 0m17.709s 00:36:28.671 user 1m3.573s 00:36:28.671 sys 0m10.232s 00:36:28.671 16:41:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:28.671 16:41:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:28.671 ************************************ 00:36:28.671 END TEST nvmf_target_disconnect 00:36:28.671 ************************************ 00:36:28.672 16:41:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:28.672 00:36:28.672 real 7m30.750s 00:36:28.672 user 19m23.480s 00:36:28.672 sys 1m31.371s 00:36:28.672 16:41:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:28.672 16:41:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.672 ************************************ 00:36:28.672 END TEST nvmf_host 00:36:28.672 ************************************ 00:36:28.672 00:36:28.672 real 29m2.682s 00:36:28.672 user 78m14.838s 00:36:28.672 sys 6m9.255s 00:36:28.672 16:41:47 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:28.672 16:41:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:28.672 ************************************ 00:36:28.672 END TEST nvmf_tcp 00:36:28.672 ************************************ 00:36:28.672 16:41:47 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]] 00:36:28.672 16:41:47 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:28.672 16:41:47 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
00:36:28.672 16:41:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:28.672 16:41:47 -- common/autotest_common.sh@10 -- # set +x 00:36:28.672 ************************************ 00:36:28.672 START TEST spdkcli_nvmf_tcp 00:36:28.672 ************************************ 00:36:28.672 16:41:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:28.672 * Looking for test storage... 00:36:28.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=824711 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # 
waitforlisten 824711 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 824711 ']' 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:28.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:28.672 16:41:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:28.672 [2024-07-26 16:41:48.140515] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:36:28.672 [2024-07-26 16:41:48.140650] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid824711 ] 00:36:28.672 EAL: No free 2048 kB hugepages reported on node 1 00:36:28.672 [2024-07-26 16:41:48.269053] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:28.931 [2024-07-26 16:41:48.525117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:28.931 [2024-07-26 16:41:48.525117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:29.497 16:41:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:29.497 16:41:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:36:29.497 16:41:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:36:29.497 16:41:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:29.497 16:41:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:29.497 16:41:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:36:29.497 16:41:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:36:29.497 16:41:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:36:29.497 16:41:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:29.497 16:41:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:29.497 16:41:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:36:29.497 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:36:29.497 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:36:29.497 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:36:29.497 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:36:29.497 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:36:29.497 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:36:29.497 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:29.497 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:36:29.497 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' 
'\''Malloc4'\'' True 00:36:29.497 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:29.497 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:29.497 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:36:29.497 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:29.497 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:29.497 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:36:29.497 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:29.497 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:29.497 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:29.497 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:29.497 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:36:29.497 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:36:29.497 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:29.497 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:36:29.497 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:29.498 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:36:29.498 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:36:29.498 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:36:29.498 ' 00:36:32.778 [2024-07-26 16:41:51.829473] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:33.344 [2024-07-26 16:41:53.046992] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:36:35.870 [2024-07-26 16:41:55.302392] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:36:37.770 [2024-07-26 16:41:57.252770] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:36:39.145 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:36:39.145 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:36:39.145 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:36:39.145 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:36:39.145 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:36:39.145 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:36:39.145 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 
00:36:39.145 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:39.145 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:36:39.145 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:36:39.145 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:39.145 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:39.145 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:36:39.145 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:39.145 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:39.145 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:36:39.145 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:39.146 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:39.146 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:39.146 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:39.146 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:36:39.146 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:36:39.146 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:39.146 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:36:39.146 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:39.146 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:36:39.146 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:36:39.146 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:36:39.146 16:41:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:36:39.146 16:41:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:39.146 16:41:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:39.146 16:41:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:36:39.146 16:41:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:39.146 16:41:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
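Note on the sequence above: spdkcli_job.py runs each quoted command through SPDK's spdkcli, pairing it with a token it then looks for in the tree (the trailing True/False controls whether that check is applied). The same NVMe-oF/TCP configuration can be built by hand against a running nvmf_tgt. A minimal sketch, assuming the same checkout layout and that scripts/spdkcli.py is invoked once per command, as the test itself does for 'll /nvmf' further down; bdev names, NQNs, serial numbers and ports are copied from this run:

    # start the target on two cores (-m 0x3) and wait for its RPC socket
    ./build/bin/nvmf_tgt -m 0x3 &
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

    # backing bdev and the TCP transport
    ./scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1
    ./scripts/spdkcli.py nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192

    # one subsystem with a namespace and a TCP listener
    ./scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
    ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc1 1
    ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4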
00:36:39.146 16:41:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:36:39.146 16:41:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:36:39.742 16:41:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:36:39.742 16:41:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:36:39.742 16:41:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:36:39.742 16:41:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:39.742 16:41:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:39.742 16:41:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:36:39.742 16:41:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:39.742 16:41:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:39.742 16:41:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:36:39.742 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:36:39.742 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:39.742 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:36:39.742 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:36:39.742 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:36:39.742 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:36:39.742 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:39.742 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:36:39.742 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:36:39.742 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:36:39.742 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:36:39.742 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:36:39.742 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:36:39.742 ' 00:36:46.310 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:36:46.310 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:36:46.310 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:46.310 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:36:46.310 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:36:46.310 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:36:46.310 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 
'nqn.2014-08.org.spdk:cnode3', False] 00:36:46.310 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:46.310 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:36:46.310 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:36:46.310 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:36:46.310 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:36:46.310 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:36:46.310 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:36:46.310 16:42:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:36:46.310 16:42:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:46.310 16:42:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:46.310 16:42:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 824711 00:36:46.310 16:42:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 824711 ']' 00:36:46.310 16:42:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 824711 00:36:46.310 16:42:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:36:46.310 16:42:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:46.310 16:42:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 824711 00:36:46.310 16:42:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:46.310 16:42:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:46.310 16:42:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 824711' 00:36:46.310 killing process with pid 824711 00:36:46.310 16:42:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 824711 00:36:46.310 16:42:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 824711 00:36:46.878 16:42:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:36:46.878 16:42:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:36:46.878 16:42:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 824711 ']' 00:36:46.878 16:42:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 824711 00:36:46.878 16:42:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 824711 ']' 00:36:46.878 16:42:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 824711 00:36:46.878 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (824711) - No such process 00:36:46.878 16:42:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 824711 is not found' 00:36:46.878 Process with pid 824711 is not found 00:36:46.878 16:42:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:36:46.878 16:42:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:36:46.878 16:42:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:36:46.878 00:36:46.878 real 0m18.496s 00:36:46.878 user 0m38.180s 00:36:46.878 sys 0m1.037s 00:36:46.878 16:42:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:46.878 16:42:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
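For the verification and teardown traced above: check_match dumps the whole tree with 'scripts/spdkcli.py ll /nvmf' into test/spdkcli/match_files/spdkcli_nvmf.test and runs test/app/match/match against the committed spdkcli_nvmf.test.match template, so a mismatch in the created objects fails the test before cleanup runs. The cleanup pass then deletes children before parents. A hand-run equivalent, sketched with the same spdkcli paths (the pid variable at the end is illustrative; substitute however the target was started):

    # namespaces, hosts and listeners first, then subsystems, then bdevs
    ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all
    ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all
    ./scripts/spdkcli.py /nvmf/subsystem delete_all
    ./scripts/spdkcli.py /bdevs/malloc delete Malloc1
    kill "$nvmf_tgt_pid"    # illustrative variable for the nvmf_tgt process started earlier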
00:36:46.878 ************************************ 00:36:46.878 END TEST spdkcli_nvmf_tcp 00:36:46.878 ************************************ 00:36:46.878 16:42:06 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:46.878 16:42:06 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:46.878 16:42:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:46.878 16:42:06 -- common/autotest_common.sh@10 -- # set +x 00:36:46.878 ************************************ 00:36:46.878 START TEST nvmf_identify_passthru 00:36:46.878 ************************************ 00:36:46.878 16:42:06 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:46.878 * Looking for test storage... 00:36:46.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:46.878 16:42:06 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:46.878 16:42:06 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:36:46.878 16:42:06 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:46.878 16:42:06 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:46.878 16:42:06 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:46.878 16:42:06 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:46.878 16:42:06 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:46.878 16:42:06 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:46.878 16:42:06 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:46.878 16:42:06 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:46.878 16:42:06 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:46.878 16:42:06 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:46.878 16:42:06 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:46.878 16:42:06 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:46.878 16:42:06 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:46.878 16:42:06 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:46.878 16:42:06 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:46.878 16:42:06 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:46.878 16:42:06 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:46.878 16:42:06 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:46.878 16:42:06 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:46.878 16:42:06 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:46.878 16:42:06 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.878 16:42:06 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.878 16:42:06 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.878 16:42:06 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:46.878 16:42:06 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.878 16:42:06 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:36:46.878 16:42:06 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:46.878 16:42:06 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:46.878 16:42:06 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:46.878 16:42:06 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:46.878 16:42:06 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:46.878 16:42:06 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:46.878 16:42:06 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:46.878 16:42:06 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:46.878 16:42:06 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:46.878 16:42:06 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:46.878 16:42:06 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:46.878 16:42:06 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:46.878 16:42:06 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.878 16:42:06 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.878 16:42:06 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.878 16:42:06 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:46.878 16:42:06 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.878 16:42:06 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:36:46.878 16:42:06 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:46.879 16:42:06 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:46.879 16:42:06 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:46.879 16:42:06 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:46.879 16:42:06 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:46.879 16:42:06 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:46.879 16:42:06 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:46.879 16:42:06 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:46.879 16:42:06 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:46.879 16:42:06 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:46.879 16:42:06 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:36:46.879 16:42:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:48.781 16:42:08 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:48.781 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:48.781 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:48.781 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:48.781 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
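Above, gather_supported_nvmf_pci_devs has matched the two Intel E810 ports (0000:0a:00.0 and 0000:0a:00.1, device id 0x159b, driver ice) and resolved their net interfaces, cvl_0_0 and cvl_0_1, through /sys/bus/pci/devices/<bdf>/net. The nvmf_tcp_init that follows splits them into a target side and an initiator side using a network namespace; condensed into plain iproute2, the wiring traced below amounts to:

    # target port goes into its own namespace, initiator port stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # 10.0.0.1 = initiator (cvl_0_1), 10.0.0.2 = target (cvl_0_0 inside the namespace)
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # allow NVMe/TCP (port 4420) in and confirm reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1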
00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:48.781 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:48.782 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:48.782 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:48.782 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:48.782 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:48.782 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:48.782 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:48.782 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:48.782 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:48.782 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:48.782 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:49.040 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:49.040 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:49.040 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:49.040 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:49.040 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:49.040 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:36:49.040 00:36:49.040 --- 10.0.0.2 ping statistics --- 00:36:49.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:49.040 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:36:49.040 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:49.040 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:49.040 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:36:49.040 00:36:49.040 --- 10.0.0.1 ping statistics --- 00:36:49.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:49.040 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:36:49.040 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:49.040 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:36:49.041 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:49.041 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:49.041 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:49.041 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:49.041 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:49.041 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:49.041 16:42:08 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:49.041 16:42:08 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:36:49.041 16:42:08 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:49.041 16:42:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:49.041 16:42:08 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:36:49.041 16:42:08 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:36:49.041 16:42:08 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:36:49.041 16:42:08 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:36:49.041 16:42:08 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:36:49.041 16:42:08 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:36:49.041 16:42:08 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:36:49.041 16:42:08 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:36:49.041 16:42:08 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:36:49.041 16:42:08 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:36:49.041 16:42:08 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:36:49.041 16:42:08 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:36:49.041 16:42:08 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:36:49.041 16:42:08 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:36:49.041 16:42:08 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:36:49.041 16:42:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:36:49.041 16:42:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:36:49.041 16:42:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:36:49.300 EAL: No free 2048 kB hugepages reported on node 1 00:36:53.488 
16:42:13 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:36:53.488 16:42:13 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:36:53.488 16:42:13 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:36:53.488 16:42:13 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:36:53.488 EAL: No free 2048 kB hugepages reported on node 1 00:36:57.675 16:42:17 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:36:57.675 16:42:17 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:36:57.675 16:42:17 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:57.675 16:42:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:57.675 16:42:17 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:36:57.675 16:42:17 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:57.675 16:42:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:57.675 16:42:17 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=830204 00:36:57.675 16:42:17 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:36:57.675 16:42:17 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:57.675 16:42:17 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 830204 00:36:57.675 16:42:17 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 830204 ']' 00:36:57.675 16:42:17 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:57.675 16:42:17 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:57.675 16:42:17 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:57.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:57.675 16:42:17 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:57.675 16:42:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:57.934 [2024-07-26 16:42:17.531838] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:36:57.934 [2024-07-26 16:42:17.532003] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:57.934 EAL: No free 2048 kB hugepages reported on node 1 00:36:57.934 [2024-07-26 16:42:17.686755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:58.193 [2024-07-26 16:42:17.950338] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:58.193 [2024-07-26 16:42:17.950414] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:36:58.193 [2024-07-26 16:42:17.950441] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:58.193 [2024-07-26 16:42:17.950462] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:58.193 [2024-07-26 16:42:17.950484] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:58.193 [2024-07-26 16:42:17.950610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:58.193 [2024-07-26 16:42:17.950682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:58.193 [2024-07-26 16:42:17.950729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:58.193 [2024-07-26 16:42:17.950741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:36:58.758 16:42:18 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:58.758 16:42:18 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:36:58.758 16:42:18 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:36:58.758 16:42:18 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.758 16:42:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:58.758 INFO: Log level set to 20 00:36:58.758 INFO: Requests: 00:36:58.758 { 00:36:58.758 "jsonrpc": "2.0", 00:36:58.758 "method": "nvmf_set_config", 00:36:58.758 "id": 1, 00:36:58.758 "params": { 00:36:58.758 "admin_cmd_passthru": { 00:36:58.758 "identify_ctrlr": true 00:36:58.758 } 00:36:58.758 } 00:36:58.758 } 00:36:58.758 00:36:58.758 INFO: response: 00:36:58.758 { 00:36:58.758 "jsonrpc": "2.0", 00:36:58.758 "id": 1, 00:36:58.758 "result": true 00:36:58.758 } 00:36:58.758 00:36:58.758 16:42:18 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.758 16:42:18 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:36:58.758 16:42:18 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.758 16:42:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:58.758 INFO: Setting log level to 20 00:36:58.758 INFO: Setting log level to 20 00:36:58.758 INFO: Log level set to 20 00:36:58.758 INFO: Log level set to 20 00:36:58.758 INFO: Requests: 00:36:58.758 { 00:36:58.758 "jsonrpc": "2.0", 00:36:58.758 "method": "framework_start_init", 00:36:58.758 "id": 1 00:36:58.758 } 00:36:58.758 00:36:58.758 INFO: Requests: 00:36:58.758 { 00:36:58.758 "jsonrpc": "2.0", 00:36:58.758 "method": "framework_start_init", 00:36:58.758 "id": 1 00:36:58.758 } 00:36:58.758 00:36:59.325 [2024-07-26 16:42:18.811425] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:36:59.325 INFO: response: 00:36:59.325 { 00:36:59.325 "jsonrpc": "2.0", 00:36:59.325 "id": 1, 00:36:59.325 "result": true 00:36:59.325 } 00:36:59.325 00:36:59.325 INFO: response: 00:36:59.325 { 00:36:59.325 "jsonrpc": "2.0", 00:36:59.325 "id": 1, 00:36:59.325 "result": true 00:36:59.325 } 00:36:59.325 00:36:59.325 16:42:18 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:59.325 16:42:18 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:59.325 16:42:18 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:59.325 16:42:18 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:36:59.325 INFO: Setting log level to 40 00:36:59.325 INFO: Setting log level to 40 00:36:59.325 INFO: Setting log level to 40 00:36:59.325 [2024-07-26 16:42:18.824161] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:59.325 16:42:18 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:59.325 16:42:18 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:36:59.325 16:42:18 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:59.325 16:42:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:59.325 16:42:18 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:36:59.325 16:42:18 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:59.325 16:42:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:02.606 Nvme0n1 00:37:02.606 16:42:21 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.606 16:42:21 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:37:02.606 16:42:21 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.606 16:42:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:02.606 16:42:21 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.606 16:42:21 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:37:02.606 16:42:21 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.606 16:42:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:02.606 16:42:21 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.606 16:42:21 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:02.606 16:42:21 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.606 16:42:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:02.606 [2024-07-26 16:42:21.767378] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:02.606 16:42:21 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.606 16:42:21 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:37:02.606 16:42:21 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.606 16:42:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:02.606 [ 00:37:02.606 { 00:37:02.606 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:37:02.606 "subtype": "Discovery", 00:37:02.606 "listen_addresses": [], 00:37:02.606 "allow_any_host": true, 00:37:02.606 "hosts": [] 00:37:02.606 }, 00:37:02.606 { 00:37:02.606 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:37:02.606 "subtype": "NVMe", 00:37:02.606 "listen_addresses": [ 00:37:02.606 { 00:37:02.606 "trtype": "TCP", 00:37:02.606 "adrfam": "IPv4", 00:37:02.606 "traddr": "10.0.0.2", 00:37:02.606 "trsvcid": "4420" 00:37:02.606 } 00:37:02.606 ], 00:37:02.606 "allow_any_host": true, 00:37:02.606 "hosts": [], 00:37:02.606 "serial_number": 
"SPDK00000000000001", 00:37:02.606 "model_number": "SPDK bdev Controller", 00:37:02.606 "max_namespaces": 1, 00:37:02.606 "min_cntlid": 1, 00:37:02.606 "max_cntlid": 65519, 00:37:02.606 "namespaces": [ 00:37:02.606 { 00:37:02.606 "nsid": 1, 00:37:02.606 "bdev_name": "Nvme0n1", 00:37:02.606 "name": "Nvme0n1", 00:37:02.606 "nguid": "ABCB96278D474469A8DB2B2BFC9CDAD0", 00:37:02.606 "uuid": "abcb9627-8d47-4469-a8db-2b2bfc9cdad0" 00:37:02.606 } 00:37:02.606 ] 00:37:02.606 } 00:37:02.606 ] 00:37:02.606 16:42:21 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.606 16:42:21 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:02.606 16:42:21 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:37:02.606 16:42:21 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:37:02.606 EAL: No free 2048 kB hugepages reported on node 1 00:37:02.606 16:42:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:37:02.606 16:42:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:02.606 16:42:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:37:02.607 16:42:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:37:02.607 EAL: No free 2048 kB hugepages reported on node 1 00:37:02.865 16:42:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:37:02.865 16:42:22 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:37:02.865 16:42:22 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:37:02.865 16:42:22 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:02.865 16:42:22 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.865 16:42:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:02.865 16:42:22 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.865 16:42:22 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:37:02.865 16:42:22 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:37:02.865 16:42:22 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:02.865 16:42:22 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:37:02.865 16:42:22 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:02.865 16:42:22 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:37:02.865 16:42:22 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:02.865 16:42:22 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:02.865 rmmod nvme_tcp 00:37:02.865 rmmod nvme_fabrics 00:37:02.865 rmmod nvme_keyring 00:37:02.865 16:42:22 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:02.865 16:42:22 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:37:02.865 16:42:22 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:37:02.865 16:42:22 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 830204 ']' 00:37:02.865 16:42:22 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 830204 00:37:02.865 16:42:22 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 830204 ']' 00:37:02.865 16:42:22 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 830204 00:37:02.865 16:42:22 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:37:02.865 16:42:22 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:02.865 16:42:22 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 830204 00:37:03.122 16:42:22 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:03.122 16:42:22 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:03.122 16:42:22 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 830204' 00:37:03.122 killing process with pid 830204 00:37:03.122 16:42:22 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 830204 00:37:03.122 16:42:22 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 830204 00:37:05.645 16:42:25 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:05.645 16:42:25 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:05.645 16:42:25 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:05.645 16:42:25 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:05.645 16:42:25 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:05.645 16:42:25 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:05.645 16:42:25 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:05.645 16:42:25 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:07.577 16:42:27 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:07.577 00:37:07.577 real 0m20.731s 00:37:07.577 user 0m34.232s 00:37:07.577 sys 0m2.803s 00:37:07.577 16:42:27 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:07.577 16:42:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:07.577 ************************************ 00:37:07.577 END TEST nvmf_identify_passthru 00:37:07.577 ************************************ 00:37:07.577 16:42:27 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:37:07.577 16:42:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:07.577 16:42:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:07.577 16:42:27 -- common/autotest_common.sh@10 -- # set +x 00:37:07.577 ************************************ 00:37:07.577 START TEST nvmf_dif 00:37:07.577 ************************************ 00:37:07.577 16:42:27 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:37:07.836 * Looking for test storage... 
00:37:07.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:07.836 16:42:27 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:07.836 16:42:27 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:37:07.836 16:42:27 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:07.836 16:42:27 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:07.836 16:42:27 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:07.836 16:42:27 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:07.836 16:42:27 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:07.836 16:42:27 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:07.836 16:42:27 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:07.836 16:42:27 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:07.836 16:42:27 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:07.836 16:42:27 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:07.836 16:42:27 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:07.836 16:42:27 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:07.836 16:42:27 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:07.836 16:42:27 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:07.836 16:42:27 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:07.836 16:42:27 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:07.836 16:42:27 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:07.836 16:42:27 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:07.836 16:42:27 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:07.836 16:42:27 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:07.836 16:42:27 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.836 16:42:27 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.836 16:42:27 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.836 16:42:27 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:37:07.836 16:42:27 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.836 16:42:27 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:37:07.836 16:42:27 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:07.836 16:42:27 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:07.836 16:42:27 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:07.836 16:42:27 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:07.836 16:42:27 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:07.836 16:42:27 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:07.836 16:42:27 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:07.836 16:42:27 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:07.836 16:42:27 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:37:07.836 16:42:27 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:37:07.836 16:42:27 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:37:07.836 16:42:27 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:37:07.836 16:42:27 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:37:07.836 16:42:27 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:07.836 16:42:27 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:07.836 16:42:27 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:07.836 16:42:27 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:07.836 16:42:27 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:07.836 16:42:27 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:07.836 16:42:27 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:07.836 16:42:27 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:07.836 16:42:27 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:07.836 16:42:27 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:07.836 16:42:27 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:37:07.836 16:42:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:09.737 16:42:29 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:09.737 16:42:29 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:37:09.737 16:42:29 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:09.737 16:42:29 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:09.737 16:42:29 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:09.737 16:42:29 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:09.737 16:42:29 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:09.737 16:42:29 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:37:09.737 16:42:29 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:09.737 16:42:29 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:37:09.737 16:42:29 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:37:09.737 16:42:29 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:37:09.737 16:42:29 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:37:09.737 16:42:29 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:37:09.737 16:42:29 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:37:09.737 16:42:29 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:09.737 16:42:29 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:09.737 16:42:29 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:09.737 16:42:29 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:09.737 16:42:29 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:09.737 16:42:29 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:09.737 16:42:29 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:09.737 16:42:29 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:09.737 16:42:29 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:09.737 16:42:29 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:09.737 16:42:29 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:09.737 16:42:29 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:09.737 16:42:29 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:09.737 16:42:29 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:09.737 16:42:29 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:09.737 16:42:29 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:09.737 16:42:29 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:09.737 16:42:29 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:09.737 16:42:29 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:09.737 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:09.737 16:42:29 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:09.737 16:42:29 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:09.737 16:42:29 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:09.737 16:42:29 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:09.738 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:09.738 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:09.738 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:09.738 16:42:29 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:09.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:09.738 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:37:09.738 00:37:09.738 --- 10.0.0.2 ping statistics --- 00:37:09.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:09.738 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:09.738 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:09.738 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:37:09.738 00:37:09.738 --- 10.0.0.1 ping statistics --- 00:37:09.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:09.738 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:37:09.738 16:42:29 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:11.113 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:37:11.113 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:37:11.113 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:37:11.113 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:37:11.113 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:37:11.113 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:37:11.113 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:37:11.113 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:37:11.113 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:37:11.113 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:37:11.113 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:37:11.113 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:37:11.113 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:37:11.113 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:37:11.113 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:37:11.113 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:37:11.113 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:37:11.113 16:42:30 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:11.113 16:42:30 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:11.113 16:42:30 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:11.113 16:42:30 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:11.113 16:42:30 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:11.113 16:42:30 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:11.113 16:42:30 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:37:11.113 16:42:30 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:37:11.113 16:42:30 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:11.113 16:42:30 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:11.113 16:42:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:11.113 16:42:30 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=833637 00:37:11.113 16:42:30 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:37:11.113 16:42:30 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 833637 00:37:11.113 16:42:30 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 833637 ']' 00:37:11.113 16:42:30 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:11.113 16:42:30 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:11.113 16:42:30 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:11.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:11.113 16:42:30 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:11.113 16:42:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:11.113 [2024-07-26 16:42:30.801202] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:37:11.113 [2024-07-26 16:42:30.801349] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:11.372 EAL: No free 2048 kB hugepages reported on node 1 00:37:11.372 [2024-07-26 16:42:30.941257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:11.630 [2024-07-26 16:42:31.198259] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:11.630 [2024-07-26 16:42:31.198342] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:11.630 [2024-07-26 16:42:31.198371] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:11.630 [2024-07-26 16:42:31.198396] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:11.630 [2024-07-26 16:42:31.198420] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
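Once the target reactor is up, the create_transport and create_subsystems helpers in dif.sh drive it over the RPC socket the harness waits on above (/var/tmp/spdk.sock). Condensed from the rpc_cmd lines that follow, the standalone equivalent with scripts/rpc.py would look roughly like this (a sketch; rpc_cmd is a thin wrapper around rpc.py, and the flags are copied verbatim from the trace rather than documented here):

    # transport with DIF insert/strip enabled, as requested below
    ./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
    # null bdev using the NULL_* defaults above (size 64, block 512, md-size 16, DIF type 1)
    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    # subsystem, namespace and TCP listener for the fio job to attach to
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420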
00:37:11.630 [2024-07-26 16:42:31.198475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:12.195 16:42:31 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:12.195 16:42:31 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:37:12.195 16:42:31 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:12.195 16:42:31 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:12.195 16:42:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:12.195 16:42:31 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:12.195 16:42:31 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:37:12.195 16:42:31 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:37:12.195 16:42:31 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:12.195 16:42:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:12.195 [2024-07-26 16:42:31.732975] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:12.195 16:42:31 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:12.195 16:42:31 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:37:12.195 16:42:31 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:12.196 16:42:31 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:12.196 16:42:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:12.196 ************************************ 00:37:12.196 START TEST fio_dif_1_default 00:37:12.196 ************************************ 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:12.196 bdev_null0 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:12.196 [2024-07-26 16:42:31.789316] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:12.196 { 00:37:12.196 "params": { 00:37:12.196 "name": "Nvme$subsystem", 00:37:12.196 "trtype": "$TEST_TRANSPORT", 00:37:12.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:12.196 "adrfam": "ipv4", 00:37:12.196 "trsvcid": "$NVMF_PORT", 00:37:12.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:12.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:12.196 "hdgst": ${hdgst:-false}, 00:37:12.196 "ddgst": ${ddgst:-false} 00:37:12.196 }, 00:37:12.196 "method": "bdev_nvme_attach_controller" 00:37:12.196 } 00:37:12.196 EOF 00:37:12.196 )") 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:12.196 "params": { 00:37:12.196 "name": "Nvme0", 00:37:12.196 "trtype": "tcp", 00:37:12.196 "traddr": "10.0.0.2", 00:37:12.196 "adrfam": "ipv4", 00:37:12.196 "trsvcid": "4420", 00:37:12.196 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:12.196 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:12.196 "hdgst": false, 00:37:12.196 "ddgst": false 00:37:12.196 }, 00:37:12.196 "method": "bdev_nvme_attach_controller" 00:37:12.196 }' 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # break 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:12.196 16:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:12.453 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:12.453 fio-3.35 00:37:12.453 Starting 1 thread 00:37:12.453 EAL: No free 2048 kB hugepages reported on node 1 00:37:24.645 00:37:24.645 filename0: (groupid=0, jobs=1): err= 0: pid=833986: Fri Jul 26 16:42:43 2024 00:37:24.645 read: IOPS=185, BW=741KiB/s (759kB/s)(7424KiB/10021msec) 00:37:24.645 slat (nsec): min=6051, max=91101, avg=14610.32, stdev=5926.87 00:37:24.645 clat (usec): min=896, max=45313, avg=21551.09, stdev=20457.41 00:37:24.645 lat (usec): min=922, max=45334, avg=21565.70, stdev=20456.85 00:37:24.645 clat percentiles (usec): 00:37:24.645 | 1.00th=[ 922], 5.00th=[ 938], 10.00th=[ 955], 20.00th=[ 996], 00:37:24.645 | 30.00th=[ 1020], 40.00th=[ 1057], 50.00th=[41681], 60.00th=[41681], 00:37:24.645 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:37:24.645 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:37:24.645 | 99.99th=[45351] 00:37:24.645 bw ( KiB/s): min= 672, max= 768, per=99.89%, avg=740.80, stdev=34.86, samples=20 00:37:24.645 iops : min= 168, max= 192, avg=185.20, stdev= 8.72, samples=20 00:37:24.645 lat (usec) : 1000=22.25% 00:37:24.645 lat (msec) : 2=27.53%, 50=50.22% 00:37:24.645 cpu : usr=91.66%, sys=7.79%, ctx=31, majf=0, minf=1636 00:37:24.645 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:24.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:24.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:24.645 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:24.645 latency 
: target=0, window=0, percentile=100.00%, depth=4 00:37:24.645 00:37:24.645 Run status group 0 (all jobs): 00:37:24.645 READ: bw=741KiB/s (759kB/s), 741KiB/s-741KiB/s (759kB/s-759kB/s), io=7424KiB (7602kB), run=10021-10021msec 00:37:24.645 ----------------------------------------------------- 00:37:24.645 Suppressions used: 00:37:24.645 count bytes template 00:37:24.645 1 8 /usr/src/fio/parse.c 00:37:24.645 1 8 libtcmalloc_minimal.so 00:37:24.645 1 904 libcrypto.so 00:37:24.645 ----------------------------------------------------- 00:37:24.645 00:37:24.645 16:42:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:37:24.645 16:42:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:37:24.645 16:42:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:37:24.645 16:42:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:24.645 16:42:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:37:24.645 16:42:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:24.645 16:42:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.645 16:42:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:24.645 16:42:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.646 00:37:24.646 real 0m12.288s 00:37:24.646 user 0m11.342s 00:37:24.646 sys 0m1.190s 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:24.646 ************************************ 00:37:24.646 END TEST fio_dif_1_default 00:37:24.646 ************************************ 00:37:24.646 16:42:44 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:37:24.646 16:42:44 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:24.646 16:42:44 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:24.646 16:42:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:24.646 ************************************ 00:37:24.646 START TEST fio_dif_1_multi_subsystems 00:37:24.646 ************************************ 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:37:24.646 16:42:44 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:24.646 bdev_null0 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:24.646 [2024-07-26 16:42:44.124855] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:24.646 bdev_null1 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:24.646 
16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:24.646 { 00:37:24.646 "params": { 00:37:24.646 "name": "Nvme$subsystem", 00:37:24.646 "trtype": "$TEST_TRANSPORT", 00:37:24.646 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:24.646 "adrfam": "ipv4", 00:37:24.646 "trsvcid": "$NVMF_PORT", 00:37:24.646 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:24.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:24.646 "hdgst": ${hdgst:-false}, 00:37:24.646 "ddgst": ${ddgst:-false} 00:37:24.646 }, 00:37:24.646 "method": "bdev_nvme_attach_controller" 00:37:24.646 } 00:37:24.646 EOF 00:37:24.646 )") 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:37:24.646 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:24.647 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:37:24.647 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:37:24.647 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:24.647 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:24.647 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:24.647 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:24.647 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:37:24.647 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:24.647 
16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:37:24.647 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:24.647 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:37:24.647 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:24.647 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:24.647 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:37:24.647 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:37:24.647 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:24.647 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:24.647 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:24.647 { 00:37:24.647 "params": { 00:37:24.647 "name": "Nvme$subsystem", 00:37:24.647 "trtype": "$TEST_TRANSPORT", 00:37:24.647 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:24.647 "adrfam": "ipv4", 00:37:24.647 "trsvcid": "$NVMF_PORT", 00:37:24.647 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:24.647 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:24.647 "hdgst": ${hdgst:-false}, 00:37:24.647 "ddgst": ${ddgst:-false} 00:37:24.647 }, 00:37:24.647 "method": "bdev_nvme_attach_controller" 00:37:24.647 } 00:37:24.647 EOF 00:37:24.647 )") 00:37:24.647 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:37:24.647 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:37:24.647 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:24.647 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:37:24.647 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:37:24.647 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:24.647 "params": { 00:37:24.647 "name": "Nvme0", 00:37:24.647 "trtype": "tcp", 00:37:24.647 "traddr": "10.0.0.2", 00:37:24.647 "adrfam": "ipv4", 00:37:24.647 "trsvcid": "4420", 00:37:24.647 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:24.647 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:24.647 "hdgst": false, 00:37:24.647 "ddgst": false 00:37:24.647 }, 00:37:24.647 "method": "bdev_nvme_attach_controller" 00:37:24.647 },{ 00:37:24.647 "params": { 00:37:24.647 "name": "Nvme1", 00:37:24.647 "trtype": "tcp", 00:37:24.647 "traddr": "10.0.0.2", 00:37:24.647 "adrfam": "ipv4", 00:37:24.647 "trsvcid": "4420", 00:37:24.647 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:24.647 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:24.647 "hdgst": false, 00:37:24.647 "ddgst": false 00:37:24.647 }, 00:37:24.647 "method": "bdev_nvme_attach_controller" 00:37:24.647 }' 00:37:24.647 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:37:24.647 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:37:24.647 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # break 00:37:24.647 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:24.647 16:42:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:24.905 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:24.905 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:24.905 fio-3.35 00:37:24.905 Starting 2 threads 00:37:24.905 EAL: No free 2048 kB hugepages reported on node 1 00:37:37.099 00:37:37.099 filename0: (groupid=0, jobs=1): err= 0: pid=835503: Fri Jul 26 16:42:55 2024 00:37:37.099 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10041msec) 00:37:37.099 slat (nsec): min=9573, max=45148, avg=16433.38, stdev=8424.83 00:37:37.099 clat (usec): min=41786, max=42983, avg=41959.77, stdev=105.29 00:37:37.099 lat (usec): min=41810, max=43000, avg=41976.20, stdev=106.19 00:37:37.099 clat percentiles (usec): 00:37:37.099 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:37:37.099 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:37:37.099 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:37:37.099 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:37:37.099 | 99.99th=[42730] 00:37:37.099 bw ( KiB/s): min= 352, max= 384, per=33.87%, avg=380.80, stdev= 9.85, samples=20 00:37:37.099 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:37:37.099 lat (msec) : 50=100.00% 00:37:37.099 cpu : usr=94.23%, sys=5.27%, ctx=15, majf=0, minf=1636 00:37:37.099 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:37.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:37.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:37.099 issued rwts: total=956,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:37:37.099 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:37.099 filename1: (groupid=0, jobs=1): err= 0: pid=835504: Fri Jul 26 16:42:55 2024 00:37:37.099 read: IOPS=185, BW=743KiB/s (760kB/s)(7440KiB/10019msec) 00:37:37.099 slat (nsec): min=9453, max=57202, avg=14630.77, stdev=6949.62 00:37:37.099 clat (usec): min=890, max=43404, avg=21499.54, stdev=20422.96 00:37:37.099 lat (usec): min=914, max=43427, avg=21514.17, stdev=20421.45 00:37:37.099 clat percentiles (usec): 00:37:37.099 | 1.00th=[ 963], 5.00th=[ 996], 10.00th=[ 1004], 20.00th=[ 1020], 00:37:37.099 | 30.00th=[ 1037], 40.00th=[ 1057], 50.00th=[41157], 60.00th=[41681], 00:37:37.099 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:37:37.099 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:37:37.099 | 99.99th=[43254] 00:37:37.099 bw ( KiB/s): min= 704, max= 768, per=66.14%, avg=742.45, stdev=32.11, samples=20 00:37:37.099 iops : min= 176, max= 192, avg=185.60, stdev= 8.04, samples=20 00:37:37.099 lat (usec) : 1000=6.61% 00:37:37.099 lat (msec) : 2=43.28%, 50=50.11% 00:37:37.099 cpu : usr=93.99%, sys=5.52%, ctx=14, majf=0, minf=1637 00:37:37.099 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:37.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:37.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:37.099 issued rwts: total=1860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:37.099 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:37.099 00:37:37.099 Run status group 0 (all jobs): 00:37:37.099 READ: bw=1122KiB/s (1149kB/s), 381KiB/s-743KiB/s (390kB/s-760kB/s), io=11.0MiB (11.5MB), run=10019-10041msec 00:37:37.099 ----------------------------------------------------- 00:37:37.099 Suppressions used: 00:37:37.099 count bytes template 00:37:37.099 2 16 /usr/src/fio/parse.c 00:37:37.099 1 8 libtcmalloc_minimal.so 00:37:37.099 1 904 libcrypto.so 00:37:37.099 ----------------------------------------------------- 00:37:37.099 00:37:37.099 16:42:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:37:37.099 16:42:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:37:37.099 16:42:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:37.099 16:42:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:37.099 16:42:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:37:37.099 16:42:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:37.099 16:42:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.099 16:42:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:37.099 16:42:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.099 16:42:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:37.099 16:42:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.099 16:42:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:37.099 16:42:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.099 16:42:56 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:37.099 16:42:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:37.099 16:42:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:37:37.100 16:42:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:37.100 16:42:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.100 16:42:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:37.100 16:42:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.100 16:42:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:37.100 16:42:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.100 16:42:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:37.100 16:42:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.100 00:37:37.100 real 0m12.543s 00:37:37.100 user 0m21.308s 00:37:37.100 sys 0m1.523s 00:37:37.100 16:42:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:37.100 16:42:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:37.100 ************************************ 00:37:37.100 END TEST fio_dif_1_multi_subsystems 00:37:37.100 ************************************ 00:37:37.100 16:42:56 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:37:37.100 16:42:56 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:37.100 16:42:56 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:37.100 16:42:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:37.100 ************************************ 00:37:37.100 START TEST fio_dif_rand_params 00:37:37.100 ************************************ 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:37.100 bdev_null0 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:37.100 [2024-07-26 16:42:56.717702] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:37.100 { 00:37:37.100 "params": { 00:37:37.100 "name": "Nvme$subsystem", 00:37:37.100 "trtype": "$TEST_TRANSPORT", 00:37:37.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:37.100 "adrfam": "ipv4", 00:37:37.100 "trsvcid": "$NVMF_PORT", 00:37:37.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:37.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:37.100 "hdgst": ${hdgst:-false}, 00:37:37.100 "ddgst": ${ddgst:-false} 00:37:37.100 }, 00:37:37.100 "method": "bdev_nvme_attach_controller" 00:37:37.100 } 00:37:37.100 EOF 00:37:37.100 )") 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:37.100 16:42:56 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:37.100 "params": { 00:37:37.100 "name": "Nvme0", 00:37:37.100 "trtype": "tcp", 00:37:37.100 "traddr": "10.0.0.2", 00:37:37.100 "adrfam": "ipv4", 00:37:37.100 "trsvcid": "4420", 00:37:37.100 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:37.100 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:37.100 "hdgst": false, 00:37:37.100 "ddgst": false 00:37:37.100 }, 00:37:37.100 "method": "bdev_nvme_attach_controller" 00:37:37.100 }' 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:37.100 16:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:37.359 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:37.359 ... 
00:37:37.359 fio-3.35 00:37:37.359 Starting 3 threads 00:37:37.359 EAL: No free 2048 kB hugepages reported on node 1 00:37:43.918 00:37:43.918 filename0: (groupid=0, jobs=1): err= 0: pid=837023: Fri Jul 26 16:43:03 2024 00:37:43.918 read: IOPS=177, BW=22.2MiB/s (23.3MB/s)(112MiB/5044msec) 00:37:43.918 slat (nsec): min=7572, max=46143, avg=18227.88, stdev=3780.57 00:37:43.918 clat (usec): min=5928, max=58815, avg=16838.03, stdev=12793.09 00:37:43.918 lat (usec): min=5946, max=58832, avg=16856.26, stdev=12792.97 00:37:43.918 clat percentiles (usec): 00:37:43.918 | 1.00th=[ 6783], 5.00th=[ 7177], 10.00th=[ 7767], 20.00th=[10028], 00:37:43.918 | 30.00th=[10945], 40.00th=[11863], 50.00th=[13042], 60.00th=[14484], 00:37:43.918 | 70.00th=[15664], 80.00th=[16909], 90.00th=[48497], 95.00th=[53740], 00:37:43.918 | 99.00th=[56886], 99.50th=[57934], 99.90th=[58983], 99.95th=[58983], 00:37:43.918 | 99.99th=[58983] 00:37:43.918 bw ( KiB/s): min=15616, max=28160, per=35.53%, avg=22839.20, stdev=4010.15, samples=10 00:37:43.918 iops : min= 122, max= 220, avg=178.40, stdev=31.35, samples=10 00:37:43.918 lat (msec) : 10=19.89%, 20=68.38%, 50=2.68%, 100=9.05% 00:37:43.918 cpu : usr=91.26%, sys=8.21%, ctx=9, majf=0, minf=1636 00:37:43.918 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:43.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:43.918 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:43.918 issued rwts: total=895,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:43.918 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:43.918 filename0: (groupid=0, jobs=1): err= 0: pid=837024: Fri Jul 26 16:43:03 2024 00:37:43.918 read: IOPS=172, BW=21.5MiB/s (22.6MB/s)(108MiB/5018msec) 00:37:43.918 slat (nsec): min=7813, max=50599, avg=18101.68, stdev=4150.05 00:37:43.919 clat (usec): min=5893, max=58676, avg=17392.94, stdev=14072.12 00:37:43.919 lat (usec): min=5910, max=58694, avg=17411.04, stdev=14072.25 00:37:43.919 clat percentiles (usec): 00:37:43.919 | 1.00th=[ 6849], 5.00th=[ 7308], 10.00th=[ 7570], 20.00th=[ 9372], 00:37:43.919 | 30.00th=[10421], 40.00th=[11469], 50.00th=[12780], 60.00th=[14222], 00:37:43.919 | 70.00th=[15533], 80.00th=[16909], 90.00th=[51119], 95.00th=[54264], 00:37:43.919 | 99.00th=[56886], 99.50th=[57410], 99.90th=[58459], 99.95th=[58459], 00:37:43.919 | 99.99th=[58459] 00:37:43.919 bw ( KiB/s): min=14848, max=27904, per=34.29%, avg=22041.60, stdev=4142.76, samples=10 00:37:43.919 iops : min= 116, max= 218, avg=172.20, stdev=32.37, samples=10 00:37:43.919 lat (msec) : 10=26.85%, 20=59.84%, 50=1.62%, 100=11.69% 00:37:43.919 cpu : usr=90.53%, sys=8.93%, ctx=9, majf=0, minf=1637 00:37:43.919 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:43.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:43.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:43.919 issued rwts: total=864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:43.919 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:43.919 filename0: (groupid=0, jobs=1): err= 0: pid=837025: Fri Jul 26 16:43:03 2024 00:37:43.919 read: IOPS=154, BW=19.3MiB/s (20.3MB/s)(96.8MiB/5003msec) 00:37:43.919 slat (nsec): min=7783, max=47167, avg=18690.76, stdev=4454.02 00:37:43.919 clat (usec): min=6258, max=92039, avg=19365.00, stdev=15115.19 00:37:43.919 lat (usec): min=6276, max=92057, avg=19383.69, stdev=15115.05 00:37:43.919 clat percentiles (usec): 
00:37:43.919 | 1.00th=[ 6456], 5.00th=[ 7046], 10.00th=[ 8029], 20.00th=[10552], 00:37:43.919 | 30.00th=[11731], 40.00th=[13304], 50.00th=[14353], 60.00th=[15533], 00:37:43.919 | 70.00th=[16909], 80.00th=[19006], 90.00th=[53216], 95.00th=[55837], 00:37:43.919 | 99.00th=[57410], 99.50th=[58459], 99.90th=[91751], 99.95th=[91751], 00:37:43.919 | 99.99th=[91751] 00:37:43.919 bw ( KiB/s): min=13824, max=24576, per=30.78%, avg=19788.80, stdev=4006.22, samples=10 00:37:43.919 iops : min= 108, max= 192, avg=154.60, stdev=31.30, samples=10 00:37:43.919 lat (msec) : 10=16.93%, 20=65.89%, 50=3.23%, 100=13.95% 00:37:43.919 cpu : usr=90.88%, sys=8.58%, ctx=9, majf=0, minf=1635 00:37:43.919 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:43.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:43.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:43.919 issued rwts: total=774,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:43.919 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:43.919 00:37:43.919 Run status group 0 (all jobs): 00:37:43.919 READ: bw=62.8MiB/s (65.8MB/s), 19.3MiB/s-22.2MiB/s (20.3MB/s-23.3MB/s), io=317MiB (332MB), run=5003-5044msec 00:37:44.511 ----------------------------------------------------- 00:37:44.511 Suppressions used: 00:37:44.511 count bytes template 00:37:44.511 5 44 /usr/src/fio/parse.c 00:37:44.511 1 8 libtcmalloc_minimal.so 00:37:44.511 1 904 libcrypto.so 00:37:44.511 ----------------------------------------------------- 00:37:44.511 00:37:44.511 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:37:44.511 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:44.511 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:44.511 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:44.511 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:37:44.512 16:43:04 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:44.512 bdev_null0 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:44.512 [2024-07-26 16:43:04.107579] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:44.512 bdev_null1 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:44.512 bdev_null2 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 
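Each create_subsystem call traced above amounts to four RPCs against the running target: create the bdev_null0 null bdev with 16 bytes of metadata and DIF type 2, create the NVMe-oF subsystem, attach the bdev as its namespace, and add a TCP listener on 10.0.0.2:4420. Assuming rpc_cmd is the usual thin wrapper around scripts/rpc.py in these autotest scripts, the equivalent for subsystem 0, issued by hand, would be roughly:

    # Sketch of create_subsystem 0; arguments mirror the rpc_cmd calls in the trace.
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Subsystems 1 and 2 repeat the same four calls with bdev_null1/cnode1 and bdev_null2/cnode2.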
00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:44.512 { 00:37:44.512 "params": { 00:37:44.512 "name": "Nvme$subsystem", 00:37:44.512 "trtype": "$TEST_TRANSPORT", 00:37:44.512 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:44.512 "adrfam": "ipv4", 00:37:44.512 "trsvcid": "$NVMF_PORT", 00:37:44.512 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:44.512 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:44.512 "hdgst": ${hdgst:-false}, 00:37:44.512 "ddgst": ${ddgst:-false} 00:37:44.512 }, 00:37:44.512 "method": "bdev_nvme_attach_controller" 00:37:44.512 } 00:37:44.512 EOF 00:37:44.512 )") 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:44.512 16:43:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:44.512 { 00:37:44.512 "params": { 00:37:44.512 "name": "Nvme$subsystem", 00:37:44.512 "trtype": "$TEST_TRANSPORT", 00:37:44.512 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:44.512 "adrfam": "ipv4", 00:37:44.512 "trsvcid": "$NVMF_PORT", 00:37:44.512 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:44.512 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:37:44.513 "hdgst": ${hdgst:-false}, 00:37:44.513 "ddgst": ${ddgst:-false} 00:37:44.513 }, 00:37:44.513 "method": "bdev_nvme_attach_controller" 00:37:44.513 } 00:37:44.513 EOF 00:37:44.513 )") 00:37:44.513 16:43:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:44.513 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:44.513 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:44.513 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:44.513 16:43:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:44.513 16:43:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:44.513 { 00:37:44.513 "params": { 00:37:44.513 "name": "Nvme$subsystem", 00:37:44.513 "trtype": "$TEST_TRANSPORT", 00:37:44.513 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:44.513 "adrfam": "ipv4", 00:37:44.513 "trsvcid": "$NVMF_PORT", 00:37:44.513 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:44.513 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:44.513 "hdgst": ${hdgst:-false}, 00:37:44.513 "ddgst": ${ddgst:-false} 00:37:44.513 }, 00:37:44.513 "method": "bdev_nvme_attach_controller" 00:37:44.513 } 00:37:44.513 EOF 00:37:44.513 )") 00:37:44.513 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:44.513 16:43:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:44.513 16:43:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:44.513 16:43:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:37:44.513 16:43:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:37:44.513 16:43:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:44.513 "params": { 00:37:44.513 "name": "Nvme0", 00:37:44.513 "trtype": "tcp", 00:37:44.513 "traddr": "10.0.0.2", 00:37:44.513 "adrfam": "ipv4", 00:37:44.513 "trsvcid": "4420", 00:37:44.513 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:44.513 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:44.513 "hdgst": false, 00:37:44.513 "ddgst": false 00:37:44.513 }, 00:37:44.513 "method": "bdev_nvme_attach_controller" 00:37:44.513 },{ 00:37:44.513 "params": { 00:37:44.513 "name": "Nvme1", 00:37:44.513 "trtype": "tcp", 00:37:44.513 "traddr": "10.0.0.2", 00:37:44.513 "adrfam": "ipv4", 00:37:44.513 "trsvcid": "4420", 00:37:44.513 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:44.513 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:44.513 "hdgst": false, 00:37:44.513 "ddgst": false 00:37:44.513 }, 00:37:44.513 "method": "bdev_nvme_attach_controller" 00:37:44.513 },{ 00:37:44.513 "params": { 00:37:44.513 "name": "Nvme2", 00:37:44.513 "trtype": "tcp", 00:37:44.513 "traddr": "10.0.0.2", 00:37:44.513 "adrfam": "ipv4", 00:37:44.513 "trsvcid": "4420", 00:37:44.513 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:37:44.513 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:37:44.513 "hdgst": false, 00:37:44.513 "ddgst": false 00:37:44.513 }, 00:37:44.513 "method": "bdev_nvme_attach_controller" 00:37:44.513 }' 00:37:44.513 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:37:44.513 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:37:44.513 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:37:44.513 16:43:04 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:44.513 16:43:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:44.770 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:44.770 ... 00:37:44.770 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:44.770 ... 00:37:44.770 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:44.770 ... 00:37:44.770 fio-3.35 00:37:44.770 Starting 24 threads 00:37:45.027 EAL: No free 2048 kB hugepages reported on node 1 00:37:57.224 00:37:57.224 filename0: (groupid=0, jobs=1): err= 0: pid=838004: Fri Jul 26 16:43:15 2024 00:37:57.224 read: IOPS=354, BW=1416KiB/s (1450kB/s)(13.9MiB/10031msec) 00:37:57.224 slat (nsec): min=13042, max=97361, avg=41426.93, stdev=14376.41 00:37:57.224 clat (usec): min=32492, max=67949, avg=44827.84, stdev=2057.53 00:37:57.224 lat (usec): min=32535, max=67982, avg=44869.27, stdev=2054.74 00:37:57.224 clat percentiles (usec): 00:37:57.224 | 1.00th=[41681], 5.00th=[43254], 10.00th=[43254], 20.00th=[43779], 00:37:57.224 | 30.00th=[44303], 40.00th=[44827], 50.00th=[44827], 60.00th=[44827], 00:37:57.224 | 70.00th=[45351], 80.00th=[45351], 90.00th=[46400], 95.00th=[46924], 00:37:57.224 | 99.00th=[47973], 99.50th=[48497], 99.90th=[67634], 99.95th=[67634], 00:37:57.224 | 99.99th=[67634] 00:37:57.224 bw ( KiB/s): min= 1280, max= 1536, per=4.16%, avg=1414.74, stdev=51.80, samples=19 00:37:57.224 iops : min= 320, max= 384, avg=353.68, stdev=12.95, samples=19 00:37:57.224 lat (msec) : 50=99.55%, 100=0.45% 00:37:57.224 cpu : usr=97.91%, sys=1.60%, ctx=19, majf=0, minf=1634 00:37:57.224 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:57.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.224 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.224 issued rwts: total=3552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.224 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.224 filename0: (groupid=0, jobs=1): err= 0: pid=838005: Fri Jul 26 16:43:15 2024 00:37:57.224 read: IOPS=363, BW=1452KiB/s (1487kB/s)(14.2MiB/10011msec) 00:37:57.224 slat (usec): min=12, max=142, avg=56.53, stdev=13.61 00:37:57.224 clat (usec): min=12797, max=88066, avg=43633.43, stdev=5945.60 00:37:57.224 lat (usec): min=12868, max=88095, avg=43689.97, stdev=5943.88 00:37:57.224 clat percentiles (usec): 00:37:57.224 | 1.00th=[25560], 5.00th=[30540], 10.00th=[39060], 20.00th=[43254], 00:37:57.224 | 30.00th=[43779], 40.00th=[44303], 50.00th=[44303], 60.00th=[44827], 00:37:57.224 | 70.00th=[44827], 80.00th=[45351], 90.00th=[46400], 95.00th=[47449], 00:37:57.224 | 99.00th=[57934], 99.50th=[65274], 99.90th=[87557], 99.95th=[87557], 00:37:57.224 | 99.99th=[87557] 00:37:57.224 bw ( KiB/s): min= 1280, max= 1712, per=4.26%, avg=1449.37, stdev=104.24, samples=19 00:37:57.224 iops : min= 320, max= 428, avg=362.32, stdev=26.07, samples=19 00:37:57.224 lat (msec) : 20=0.44%, 50=96.42%, 100=3.14% 00:37:57.224 cpu : usr=97.98%, sys=1.49%, ctx=14, majf=0, minf=1633 00:37:57.224 IO depths : 1=4.5%, 2=9.1%, 4=19.4%, 8=58.2%, 16=8.8%, 32=0.0%, >=64=0.0% 00:37:57.224 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.224 complete : 0=0.0%, 4=92.7%, 8=2.3%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.224 issued rwts: total=3634,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.224 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.224 filename0: (groupid=0, jobs=1): err= 0: pid=838006: Fri Jul 26 16:43:15 2024 00:37:57.224 read: IOPS=355, BW=1422KiB/s (1456kB/s)(13.9MiB/10040msec) 00:37:57.224 slat (nsec): min=11215, max=94771, avg=20439.09, stdev=8166.46 00:37:57.224 clat (usec): min=29917, max=62371, avg=44836.20, stdev=1875.10 00:37:57.224 lat (usec): min=29949, max=62417, avg=44856.64, stdev=1874.29 00:37:57.224 clat percentiles (usec): 00:37:57.224 | 1.00th=[33162], 5.00th=[43254], 10.00th=[43779], 20.00th=[43779], 00:37:57.224 | 30.00th=[44303], 40.00th=[44827], 50.00th=[44827], 60.00th=[44827], 00:37:57.224 | 70.00th=[45351], 80.00th=[45876], 90.00th=[46400], 95.00th=[46924], 00:37:57.224 | 99.00th=[47973], 99.50th=[48497], 99.90th=[60031], 99.95th=[62129], 00:37:57.224 | 99.99th=[62129] 00:37:57.224 bw ( KiB/s): min= 1408, max= 1536, per=4.17%, avg=1420.80, stdev=39.40, samples=20 00:37:57.224 iops : min= 352, max= 384, avg=355.20, stdev= 9.85, samples=20 00:37:57.224 lat (msec) : 50=99.78%, 100=0.22% 00:37:57.224 cpu : usr=97.99%, sys=1.55%, ctx=21, majf=0, minf=1637 00:37:57.224 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:57.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.225 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.225 issued rwts: total=3568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.225 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.225 filename0: (groupid=0, jobs=1): err= 0: pid=838007: Fri Jul 26 16:43:15 2024 00:37:57.225 read: IOPS=353, BW=1413KiB/s (1446kB/s)(13.8MiB/10013msec) 00:37:57.225 slat (usec): min=11, max=192, avg=30.04, stdev= 8.82 00:37:57.225 clat (usec): min=24653, max=90239, avg=44993.87, stdev=3478.10 00:37:57.225 lat (usec): min=24677, max=90279, avg=45023.91, stdev=3477.72 00:37:57.225 clat percentiles (usec): 00:37:57.225 | 1.00th=[40109], 5.00th=[43254], 10.00th=[43779], 20.00th=[43779], 00:37:57.225 | 30.00th=[44303], 40.00th=[44827], 50.00th=[44827], 60.00th=[44827], 00:37:57.225 | 70.00th=[45351], 80.00th=[45876], 90.00th=[46400], 95.00th=[46924], 00:37:57.225 | 99.00th=[48497], 99.50th=[53740], 99.90th=[90702], 99.95th=[90702], 00:37:57.225 | 99.99th=[90702] 00:37:57.225 bw ( KiB/s): min= 1280, max= 1536, per=4.14%, avg=1407.84, stdev=42.67, samples=19 00:37:57.225 iops : min= 320, max= 384, avg=351.95, stdev=10.67, samples=19 00:37:57.225 lat (msec) : 50=99.49%, 100=0.51% 00:37:57.225 cpu : usr=98.10%, sys=1.42%, ctx=20, majf=0, minf=1636 00:37:57.225 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:37:57.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.225 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.225 issued rwts: total=3536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.225 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.225 filename0: (groupid=0, jobs=1): err= 0: pid=838008: Fri Jul 26 16:43:15 2024 00:37:57.225 read: IOPS=352, BW=1412KiB/s (1445kB/s)(13.8MiB/10009msec) 00:37:57.225 slat (nsec): min=11530, max=96091, avg=37451.35, stdev=12411.81 00:37:57.225 clat (usec): min=26863, max=93034, avg=45008.84, stdev=4451.11 
00:37:57.225 lat (usec): min=26875, max=93063, avg=45046.30, stdev=4450.50 00:37:57.225 clat percentiles (usec): 00:37:57.225 | 1.00th=[31851], 5.00th=[42730], 10.00th=[43254], 20.00th=[43779], 00:37:57.225 | 30.00th=[44303], 40.00th=[44303], 50.00th=[44827], 60.00th=[44827], 00:37:57.225 | 70.00th=[45351], 80.00th=[45876], 90.00th=[46400], 95.00th=[46924], 00:37:57.225 | 99.00th=[58983], 99.50th=[74974], 99.90th=[92799], 99.95th=[92799], 00:37:57.225 | 99.99th=[92799] 00:37:57.225 bw ( KiB/s): min= 1152, max= 1536, per=4.13%, avg=1406.32, stdev=76.34, samples=19 00:37:57.225 iops : min= 288, max= 384, avg=351.58, stdev=19.09, samples=19 00:37:57.225 lat (msec) : 50=97.99%, 100=2.01% 00:37:57.225 cpu : usr=97.77%, sys=1.65%, ctx=92, majf=0, minf=1636 00:37:57.225 IO depths : 1=4.2%, 2=10.2%, 4=23.9%, 8=53.3%, 16=8.4%, 32=0.0%, >=64=0.0% 00:37:57.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.225 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.225 issued rwts: total=3532,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.225 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.225 filename0: (groupid=0, jobs=1): err= 0: pid=838009: Fri Jul 26 16:43:15 2024 00:37:57.225 read: IOPS=353, BW=1416KiB/s (1450kB/s)(13.9MiB/10034msec) 00:37:57.225 slat (usec): min=8, max=345, avg=38.18, stdev=11.86 00:37:57.225 clat (usec): min=29262, max=70518, avg=44859.95, stdev=2331.52 00:37:57.225 lat (usec): min=29295, max=70548, avg=44898.13, stdev=2329.13 00:37:57.225 clat percentiles (usec): 00:37:57.225 | 1.00th=[40633], 5.00th=[43254], 10.00th=[43254], 20.00th=[43779], 00:37:57.225 | 30.00th=[44303], 40.00th=[44827], 50.00th=[44827], 60.00th=[44827], 00:37:57.225 | 70.00th=[45351], 80.00th=[45351], 90.00th=[46400], 95.00th=[46924], 00:37:57.225 | 99.00th=[48497], 99.50th=[63701], 99.90th=[70779], 99.95th=[70779], 00:37:57.225 | 99.99th=[70779] 00:37:57.225 bw ( KiB/s): min= 1280, max= 1536, per=4.16%, avg=1414.74, stdev=51.80, samples=19 00:37:57.225 iops : min= 320, max= 384, avg=353.68, stdev=12.95, samples=19 00:37:57.225 lat (msec) : 50=99.32%, 100=0.68% 00:37:57.225 cpu : usr=96.22%, sys=2.58%, ctx=198, majf=0, minf=1635 00:37:57.225 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:57.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.225 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.225 issued rwts: total=3552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.225 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.225 filename0: (groupid=0, jobs=1): err= 0: pid=838010: Fri Jul 26 16:43:15 2024 00:37:57.225 read: IOPS=354, BW=1418KiB/s (1452kB/s)(13.9MiB/10020msec) 00:37:57.225 slat (nsec): min=11351, max=78380, avg=26654.50, stdev=10922.87 00:37:57.225 clat (usec): min=17627, max=74591, avg=44884.49, stdev=3143.14 00:37:57.225 lat (usec): min=17644, max=74617, avg=44911.14, stdev=3141.81 00:37:57.225 clat percentiles (usec): 00:37:57.225 | 1.00th=[39584], 5.00th=[43254], 10.00th=[43254], 20.00th=[43779], 00:37:57.225 | 30.00th=[44303], 40.00th=[44827], 50.00th=[44827], 60.00th=[44827], 00:37:57.225 | 70.00th=[45351], 80.00th=[45876], 90.00th=[46400], 95.00th=[46924], 00:37:57.225 | 99.00th=[49021], 99.50th=[70779], 99.90th=[74974], 99.95th=[74974], 00:37:57.225 | 99.99th=[74974] 00:37:57.225 bw ( KiB/s): min= 1280, max= 1536, per=4.16%, avg=1414.74, stdev=78.26, samples=19 00:37:57.225 iops : min= 320, max= 
384, avg=353.68, stdev=19.56, samples=19 00:37:57.225 lat (msec) : 20=0.06%, 50=99.27%, 100=0.68% 00:37:57.225 cpu : usr=95.60%, sys=2.64%, ctx=200, majf=0, minf=1635 00:37:57.225 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:37:57.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.225 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.225 issued rwts: total=3552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.225 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.225 filename0: (groupid=0, jobs=1): err= 0: pid=838011: Fri Jul 26 16:43:15 2024 00:37:57.225 read: IOPS=352, BW=1412KiB/s (1446kB/s)(13.8MiB/10018msec) 00:37:57.225 slat (usec): min=11, max=107, avg=34.94, stdev=13.12 00:37:57.225 clat (msec): min=24, max=101, avg=44.98, stdev= 3.84 00:37:57.225 lat (msec): min=24, max=101, avg=45.01, stdev= 3.84 00:37:57.225 clat percentiles (msec): 00:37:57.225 | 1.00th=[ 41], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:37:57.225 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:37:57.225 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 47], 00:37:57.225 | 99.00th=[ 49], 99.50th=[ 51], 99.90th=[ 95], 99.95th=[ 102], 00:37:57.225 | 99.99th=[ 102] 00:37:57.225 bw ( KiB/s): min= 1152, max= 1536, per=4.14%, avg=1407.68, stdev=73.91, samples=19 00:37:57.225 iops : min= 288, max= 384, avg=351.89, stdev=18.48, samples=19 00:37:57.225 lat (msec) : 50=99.49%, 100=0.45%, 250=0.06% 00:37:57.225 cpu : usr=97.61%, sys=1.82%, ctx=54, majf=0, minf=1633 00:37:57.225 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:57.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.225 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.225 issued rwts: total=3536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.225 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.225 filename1: (groupid=0, jobs=1): err= 0: pid=838012: Fri Jul 26 16:43:15 2024 00:37:57.225 read: IOPS=356, BW=1426KiB/s (1460kB/s)(13.9MiB/10009msec) 00:37:57.225 slat (usec): min=7, max=120, avg=27.44, stdev=16.11 00:37:57.225 clat (usec): min=25447, max=52789, avg=44637.89, stdev=2505.81 00:37:57.225 lat (usec): min=25455, max=52859, avg=44665.34, stdev=2504.00 00:37:57.225 clat percentiles (usec): 00:37:57.225 | 1.00th=[30540], 5.00th=[42730], 10.00th=[43254], 20.00th=[43779], 00:37:57.225 | 30.00th=[44303], 40.00th=[44827], 50.00th=[44827], 60.00th=[44827], 00:37:57.225 | 70.00th=[45351], 80.00th=[45876], 90.00th=[46400], 95.00th=[46924], 00:37:57.225 | 99.00th=[49021], 99.50th=[51119], 99.90th=[52691], 99.95th=[52691], 00:37:57.225 | 99.99th=[52691] 00:37:57.225 bw ( KiB/s): min= 1280, max= 1536, per=4.20%, avg=1428.21, stdev=62.84, samples=19 00:37:57.225 iops : min= 320, max= 384, avg=357.05, stdev=15.71, samples=19 00:37:57.225 lat (msec) : 50=99.10%, 100=0.90% 00:37:57.225 cpu : usr=95.54%, sys=2.54%, ctx=70, majf=0, minf=1634 00:37:57.225 IO depths : 1=3.4%, 2=9.7%, 4=25.0%, 8=52.8%, 16=9.1%, 32=0.0%, >=64=0.0% 00:37:57.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.225 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.225 issued rwts: total=3568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.225 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.225 filename1: (groupid=0, jobs=1): err= 0: pid=838013: Fri Jul 26 16:43:15 
2024 00:37:57.225 read: IOPS=354, BW=1417KiB/s (1451kB/s)(13.9MiB/10028msec) 00:37:57.225 slat (nsec): min=8158, max=96635, avg=44609.05, stdev=13039.38 00:37:57.225 clat (usec): min=29907, max=81495, avg=44773.48, stdev=2070.73 00:37:57.225 lat (usec): min=29942, max=81524, avg=44818.09, stdev=2066.92 00:37:57.225 clat percentiles (usec): 00:37:57.225 | 1.00th=[40633], 5.00th=[42730], 10.00th=[43254], 20.00th=[43779], 00:37:57.225 | 30.00th=[44303], 40.00th=[44303], 50.00th=[44827], 60.00th=[44827], 00:37:57.225 | 70.00th=[45351], 80.00th=[45351], 90.00th=[46400], 95.00th=[46924], 00:37:57.225 | 99.00th=[47973], 99.50th=[49021], 99.90th=[64750], 99.95th=[81265], 00:37:57.225 | 99.99th=[81265] 00:37:57.225 bw ( KiB/s): min= 1280, max= 1536, per=4.16%, avg=1414.74, stdev=51.80, samples=19 00:37:57.225 iops : min= 320, max= 384, avg=353.68, stdev=12.95, samples=19 00:37:57.225 lat (msec) : 50=99.55%, 100=0.45% 00:37:57.225 cpu : usr=95.78%, sys=2.37%, ctx=76, majf=0, minf=1636 00:37:57.225 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:57.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.225 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.225 issued rwts: total=3552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.225 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.225 filename1: (groupid=0, jobs=1): err= 0: pid=838014: Fri Jul 26 16:43:15 2024 00:37:57.225 read: IOPS=354, BW=1417KiB/s (1451kB/s)(13.9MiB/10021msec) 00:37:57.225 slat (nsec): min=11677, max=85126, avg=32483.36, stdev=14444.51 00:37:57.225 clat (usec): min=18966, max=76980, avg=44912.96, stdev=3023.03 00:37:57.225 lat (usec): min=18988, max=76995, avg=44945.45, stdev=3019.97 00:37:57.225 clat percentiles (usec): 00:37:57.225 | 1.00th=[40633], 5.00th=[43254], 10.00th=[43254], 20.00th=[43779], 00:37:57.225 | 30.00th=[44303], 40.00th=[44827], 50.00th=[44827], 60.00th=[44827], 00:37:57.226 | 70.00th=[45351], 80.00th=[45876], 90.00th=[46400], 95.00th=[46924], 00:37:57.226 | 99.00th=[49021], 99.50th=[74974], 99.90th=[74974], 99.95th=[77071], 00:37:57.226 | 99.99th=[77071] 00:37:57.226 bw ( KiB/s): min= 1264, max= 1536, per=4.15%, avg=1413.89, stdev=75.84, samples=19 00:37:57.226 iops : min= 316, max= 384, avg=353.47, stdev=18.96, samples=19 00:37:57.226 lat (msec) : 20=0.06%, 50=99.32%, 100=0.62% 00:37:57.226 cpu : usr=97.94%, sys=1.57%, ctx=15, majf=0, minf=1636 00:37:57.226 IO depths : 1=1.2%, 2=7.4%, 4=25.0%, 8=55.1%, 16=11.3%, 32=0.0%, >=64=0.0% 00:37:57.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.226 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.226 issued rwts: total=3550,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.226 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.226 filename1: (groupid=0, jobs=1): err= 0: pid=838015: Fri Jul 26 16:43:15 2024 00:37:57.226 read: IOPS=355, BW=1421KiB/s (1455kB/s)(13.9MiB/10041msec) 00:37:57.226 slat (usec): min=9, max=111, avg=38.53, stdev=10.25 00:37:57.226 clat (usec): min=27499, max=65739, avg=44694.35, stdev=1881.78 00:37:57.226 lat (usec): min=27534, max=65775, avg=44732.88, stdev=1881.44 00:37:57.226 clat percentiles (usec): 00:37:57.226 | 1.00th=[33424], 5.00th=[43254], 10.00th=[43779], 20.00th=[43779], 00:37:57.226 | 30.00th=[44303], 40.00th=[44827], 50.00th=[44827], 60.00th=[44827], 00:37:57.226 | 70.00th=[45351], 80.00th=[45351], 90.00th=[46400], 95.00th=[46924], 
00:37:57.226 | 99.00th=[47973], 99.50th=[48497], 99.90th=[56886], 99.95th=[65799], 00:37:57.226 | 99.99th=[65799] 00:37:57.226 bw ( KiB/s): min= 1408, max= 1536, per=4.17%, avg=1420.80, stdev=39.40, samples=20 00:37:57.226 iops : min= 352, max= 384, avg=355.20, stdev= 9.85, samples=20 00:37:57.226 lat (msec) : 50=99.83%, 100=0.17% 00:37:57.226 cpu : usr=90.73%, sys=4.35%, ctx=138, majf=0, minf=1637 00:37:57.226 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:57.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.226 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.226 issued rwts: total=3568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.226 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.226 filename1: (groupid=0, jobs=1): err= 0: pid=838016: Fri Jul 26 16:43:15 2024 00:37:57.226 read: IOPS=354, BW=1418KiB/s (1452kB/s)(13.9MiB/10023msec) 00:37:57.226 slat (nsec): min=12192, max=93636, avg=41270.91, stdev=12149.25 00:37:57.226 clat (usec): min=27910, max=64103, avg=44771.59, stdev=2114.90 00:37:57.226 lat (usec): min=27935, max=64141, avg=44812.86, stdev=2114.16 00:37:57.226 clat percentiles (usec): 00:37:57.226 | 1.00th=[41157], 5.00th=[43254], 10.00th=[43254], 20.00th=[43779], 00:37:57.226 | 30.00th=[44303], 40.00th=[44303], 50.00th=[44827], 60.00th=[44827], 00:37:57.226 | 70.00th=[45351], 80.00th=[45351], 90.00th=[45876], 95.00th=[46924], 00:37:57.226 | 99.00th=[47973], 99.50th=[60031], 99.90th=[64226], 99.95th=[64226], 00:37:57.226 | 99.99th=[64226] 00:37:57.226 bw ( KiB/s): min= 1280, max= 1536, per=4.16%, avg=1414.74, stdev=51.80, samples=19 00:37:57.226 iops : min= 320, max= 384, avg=353.68, stdev=12.95, samples=19 00:37:57.226 lat (msec) : 50=99.44%, 100=0.56% 00:37:57.226 cpu : usr=98.13%, sys=1.39%, ctx=15, majf=0, minf=1635 00:37:57.226 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:57.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.226 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.226 issued rwts: total=3552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.226 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.226 filename1: (groupid=0, jobs=1): err= 0: pid=838017: Fri Jul 26 16:43:15 2024 00:37:57.226 read: IOPS=356, BW=1427KiB/s (1461kB/s)(13.9MiB/10004msec) 00:37:57.226 slat (nsec): min=8322, max=86681, avg=29293.69, stdev=9944.83 00:37:57.226 clat (usec): min=21046, max=49137, avg=44598.33, stdev=2519.14 00:37:57.226 lat (usec): min=21055, max=49162, avg=44627.62, stdev=2518.56 00:37:57.226 clat percentiles (usec): 00:37:57.226 | 1.00th=[30802], 5.00th=[43254], 10.00th=[43254], 20.00th=[43779], 00:37:57.226 | 30.00th=[44303], 40.00th=[44827], 50.00th=[44827], 60.00th=[44827], 00:37:57.226 | 70.00th=[45351], 80.00th=[45876], 90.00th=[46400], 95.00th=[46924], 00:37:57.226 | 99.00th=[48497], 99.50th=[48497], 99.90th=[49021], 99.95th=[49021], 00:37:57.226 | 99.99th=[49021] 00:37:57.226 bw ( KiB/s): min= 1408, max= 1536, per=4.20%, avg=1428.21, stdev=47.95, samples=19 00:37:57.226 iops : min= 352, max= 384, avg=357.05, stdev=11.99, samples=19 00:37:57.226 lat (msec) : 50=100.00% 00:37:57.226 cpu : usr=89.38%, sys=5.25%, ctx=800, majf=0, minf=1637 00:37:57.226 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:37:57.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.226 complete : 
0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.226 issued rwts: total=3568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.226 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.226 filename1: (groupid=0, jobs=1): err= 0: pid=838018: Fri Jul 26 16:43:15 2024 00:37:57.226 read: IOPS=353, BW=1412KiB/s (1446kB/s)(13.8MiB/10015msec) 00:37:57.226 slat (nsec): min=8415, max=78020, avg=31907.86, stdev=10531.61 00:37:57.226 clat (usec): min=36152, max=90701, avg=45040.87, stdev=2931.26 00:37:57.226 lat (usec): min=36179, max=90736, avg=45072.78, stdev=2929.54 00:37:57.226 clat percentiles (usec): 00:37:57.226 | 1.00th=[42206], 5.00th=[43254], 10.00th=[43779], 20.00th=[43779], 00:37:57.226 | 30.00th=[44303], 40.00th=[44827], 50.00th=[44827], 60.00th=[44827], 00:37:57.226 | 70.00th=[45351], 80.00th=[45876], 90.00th=[46400], 95.00th=[46924], 00:37:57.226 | 99.00th=[48497], 99.50th=[53216], 99.90th=[84411], 99.95th=[90702], 00:37:57.226 | 99.99th=[90702] 00:37:57.226 bw ( KiB/s): min= 1280, max= 1536, per=4.14%, avg=1408.00, stdev=42.67, samples=19 00:37:57.226 iops : min= 320, max= 384, avg=352.00, stdev=10.67, samples=19 00:37:57.226 lat (msec) : 50=99.32%, 100=0.68% 00:37:57.226 cpu : usr=91.89%, sys=4.05%, ctx=398, majf=0, minf=1636 00:37:57.226 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:37:57.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.226 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.226 issued rwts: total=3536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.226 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.226 filename1: (groupid=0, jobs=1): err= 0: pid=838019: Fri Jul 26 16:43:15 2024 00:37:57.226 read: IOPS=354, BW=1420KiB/s (1454kB/s)(13.9MiB/10009msec) 00:37:57.226 slat (usec): min=8, max=101, avg=24.00, stdev=15.78 00:37:57.226 clat (usec): min=29905, max=56517, avg=44872.50, stdev=1836.32 00:37:57.226 lat (usec): min=29934, max=56542, avg=44896.50, stdev=1830.41 00:37:57.226 clat percentiles (usec): 00:37:57.226 | 1.00th=[42206], 5.00th=[42730], 10.00th=[43779], 20.00th=[44303], 00:37:57.226 | 30.00th=[44303], 40.00th=[44827], 50.00th=[44827], 60.00th=[44827], 00:37:57.226 | 70.00th=[45351], 80.00th=[45876], 90.00th=[46400], 95.00th=[46924], 00:37:57.226 | 99.00th=[48497], 99.50th=[49021], 99.90th=[56361], 99.95th=[56361], 00:37:57.226 | 99.99th=[56361] 00:37:57.226 bw ( KiB/s): min= 1280, max= 1536, per=4.18%, avg=1421.47, stdev=72.59, samples=19 00:37:57.226 iops : min= 320, max= 384, avg=355.37, stdev=18.15, samples=19 00:37:57.226 lat (msec) : 50=99.55%, 100=0.45% 00:37:57.226 cpu : usr=97.99%, sys=1.53%, ctx=18, majf=0, minf=1635 00:37:57.226 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:57.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.226 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.226 issued rwts: total=3552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.226 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.226 filename2: (groupid=0, jobs=1): err= 0: pid=838020: Fri Jul 26 16:43:15 2024 00:37:57.226 read: IOPS=353, BW=1413KiB/s (1447kB/s)(13.8MiB/10012msec) 00:37:57.226 slat (nsec): min=11170, max=86158, avg=39715.13, stdev=12994.31 00:37:57.226 clat (usec): min=28067, max=88936, avg=44936.16, stdev=3371.48 00:37:57.226 lat (usec): min=28098, max=88965, avg=44975.88, stdev=3368.91 
00:37:57.226 clat percentiles (usec): 00:37:57.226 | 1.00th=[42206], 5.00th=[43254], 10.00th=[43254], 20.00th=[43779], 00:37:57.226 | 30.00th=[44303], 40.00th=[44827], 50.00th=[44827], 60.00th=[44827], 00:37:57.226 | 70.00th=[45351], 80.00th=[45351], 90.00th=[45876], 95.00th=[46924], 00:37:57.226 | 99.00th=[49021], 99.50th=[52167], 99.90th=[88605], 99.95th=[88605], 00:37:57.226 | 99.99th=[88605] 00:37:57.226 bw ( KiB/s): min= 1282, max= 1536, per=4.14%, avg=1407.95, stdev=42.34, samples=19 00:37:57.226 iops : min= 320, max= 384, avg=351.95, stdev=10.67, samples=19 00:37:57.226 lat (msec) : 50=99.10%, 100=0.90% 00:37:57.226 cpu : usr=97.93%, sys=1.58%, ctx=17, majf=0, minf=1635 00:37:57.226 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:57.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.226 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.226 issued rwts: total=3536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.226 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.226 filename2: (groupid=0, jobs=1): err= 0: pid=838021: Fri Jul 26 16:43:15 2024 00:37:57.226 read: IOPS=354, BW=1417KiB/s (1451kB/s)(13.9MiB/10028msec) 00:37:57.226 slat (usec): min=9, max=100, avg=47.19, stdev=13.73 00:37:57.226 clat (usec): min=29765, max=64828, avg=44739.89, stdev=1995.76 00:37:57.226 lat (usec): min=29813, max=64857, avg=44787.08, stdev=1991.95 00:37:57.226 clat percentiles (usec): 00:37:57.226 | 1.00th=[40633], 5.00th=[42730], 10.00th=[43254], 20.00th=[43779], 00:37:57.226 | 30.00th=[44303], 40.00th=[44303], 50.00th=[44827], 60.00th=[44827], 00:37:57.226 | 70.00th=[45351], 80.00th=[45351], 90.00th=[46400], 95.00th=[46924], 00:37:57.226 | 99.00th=[48497], 99.50th=[57410], 99.90th=[64750], 99.95th=[64750], 00:37:57.226 | 99.99th=[64750] 00:37:57.226 bw ( KiB/s): min= 1280, max= 1536, per=4.16%, avg=1414.74, stdev=51.80, samples=19 00:37:57.226 iops : min= 320, max= 384, avg=353.68, stdev=12.95, samples=19 00:37:57.226 lat (msec) : 50=99.44%, 100=0.56% 00:37:57.226 cpu : usr=95.68%, sys=2.48%, ctx=67, majf=0, minf=1633 00:37:57.226 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:57.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.226 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.227 issued rwts: total=3552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.227 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.227 filename2: (groupid=0, jobs=1): err= 0: pid=838022: Fri Jul 26 16:43:15 2024 00:37:57.227 read: IOPS=355, BW=1420KiB/s (1455kB/s)(13.9MiB/10042msec) 00:37:57.227 slat (usec): min=8, max=123, avg=42.14, stdev=17.94 00:37:57.227 clat (usec): min=30206, max=63415, avg=44681.80, stdev=2117.58 00:37:57.227 lat (usec): min=30234, max=63462, avg=44723.94, stdev=2115.69 00:37:57.227 clat percentiles (usec): 00:37:57.227 | 1.00th=[32637], 5.00th=[42730], 10.00th=[43254], 20.00th=[43779], 00:37:57.227 | 30.00th=[44303], 40.00th=[44827], 50.00th=[44827], 60.00th=[44827], 00:37:57.227 | 70.00th=[45351], 80.00th=[45351], 90.00th=[46400], 95.00th=[46924], 00:37:57.227 | 99.00th=[47973], 99.50th=[52691], 99.90th=[59507], 99.95th=[63177], 00:37:57.227 | 99.99th=[63177] 00:37:57.227 bw ( KiB/s): min= 1392, max= 1536, per=4.18%, avg=1421.47, stdev=41.06, samples=19 00:37:57.227 iops : min= 348, max= 384, avg=355.37, stdev=10.26, samples=19 00:37:57.227 lat (msec) : 50=99.44%, 100=0.56% 
00:37:57.227 cpu : usr=97.78%, sys=1.69%, ctx=16, majf=0, minf=1636 00:37:57.227 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:37:57.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.227 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.227 issued rwts: total=3566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.227 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.227 filename2: (groupid=0, jobs=1): err= 0: pid=838023: Fri Jul 26 16:43:15 2024 00:37:57.227 read: IOPS=354, BW=1417KiB/s (1451kB/s)(13.9MiB/10028msec) 00:37:57.227 slat (usec): min=13, max=148, avg=42.31, stdev=12.18 00:37:57.227 clat (usec): min=27855, max=69185, avg=44777.05, stdev=2256.90 00:37:57.227 lat (usec): min=27877, max=69242, avg=44819.37, stdev=2255.46 00:37:57.227 clat percentiles (usec): 00:37:57.227 | 1.00th=[42206], 5.00th=[43254], 10.00th=[43254], 20.00th=[43779], 00:37:57.227 | 30.00th=[44303], 40.00th=[44303], 50.00th=[44827], 60.00th=[44827], 00:37:57.227 | 70.00th=[45351], 80.00th=[45351], 90.00th=[46400], 95.00th=[46924], 00:37:57.227 | 99.00th=[47973], 99.50th=[48497], 99.90th=[68682], 99.95th=[68682], 00:37:57.227 | 99.99th=[68682] 00:37:57.227 bw ( KiB/s): min= 1280, max= 1536, per=4.16%, avg=1414.74, stdev=67.11, samples=19 00:37:57.227 iops : min= 320, max= 384, avg=353.68, stdev=16.78, samples=19 00:37:57.227 lat (msec) : 50=99.55%, 100=0.45% 00:37:57.227 cpu : usr=97.80%, sys=1.73%, ctx=15, majf=0, minf=1633 00:37:57.227 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:57.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.227 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.227 issued rwts: total=3552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.227 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.227 filename2: (groupid=0, jobs=1): err= 0: pid=838024: Fri Jul 26 16:43:15 2024 00:37:57.227 read: IOPS=358, BW=1436KiB/s (1470kB/s)(14.0MiB/10020msec) 00:37:57.227 slat (nsec): min=11158, max=95871, avg=39837.39, stdev=16954.06 00:37:57.227 clat (msec): min=13, max=100, avg=44.29, stdev= 4.95 00:37:57.227 lat (msec): min=13, max=100, avg=44.33, stdev= 4.95 00:37:57.227 clat percentiles (msec): 00:37:57.227 | 1.00th=[ 26], 5.00th=[ 36], 10.00th=[ 43], 20.00th=[ 44], 00:37:57.227 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:37:57.227 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 47], 95.00th=[ 47], 00:37:57.227 | 99.00th=[ 60], 99.50th=[ 73], 99.90th=[ 75], 99.95th=[ 101], 00:37:57.227 | 99.99th=[ 102] 00:37:57.227 bw ( KiB/s): min= 1264, max= 1664, per=4.21%, avg=1433.26, stdev=77.39, samples=19 00:37:57.227 iops : min= 316, max= 416, avg=358.32, stdev=19.35, samples=19 00:37:57.227 lat (msec) : 20=0.33%, 50=97.58%, 100=2.03%, 250=0.06% 00:37:57.227 cpu : usr=97.49%, sys=1.70%, ctx=111, majf=0, minf=1634 00:37:57.227 IO depths : 1=2.4%, 2=6.2%, 4=17.2%, 8=62.5%, 16=11.8%, 32=0.0%, >=64=0.0% 00:37:57.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.227 complete : 0=0.0%, 4=92.6%, 8=3.3%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.227 issued rwts: total=3596,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.227 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.227 filename2: (groupid=0, jobs=1): err= 0: pid=838025: Fri Jul 26 16:43:15 2024 00:37:57.227 read: IOPS=355, BW=1420KiB/s (1454kB/s)(13.9MiB/10009msec) 
00:37:57.227 slat (nsec): min=11461, max=85195, avg=36948.40, stdev=16607.42 00:37:57.227 clat (usec): min=15389, max=86429, avg=44766.99, stdev=5489.29 00:37:57.227 lat (usec): min=15403, max=86468, avg=44803.94, stdev=5488.91 00:37:57.227 clat percentiles (usec): 00:37:57.227 | 1.00th=[26608], 5.00th=[36439], 10.00th=[42730], 20.00th=[43779], 00:37:57.227 | 30.00th=[44303], 40.00th=[44303], 50.00th=[44827], 60.00th=[44827], 00:37:57.227 | 70.00th=[45351], 80.00th=[45876], 90.00th=[46924], 95.00th=[48497], 00:37:57.227 | 99.00th=[71828], 99.50th=[74974], 99.90th=[86508], 99.95th=[86508], 00:37:57.227 | 99.99th=[86508] 00:37:57.227 bw ( KiB/s): min= 1280, max= 1600, per=4.16%, avg=1415.58, stdev=68.38, samples=19 00:37:57.227 iops : min= 320, max= 400, avg=353.89, stdev=17.09, samples=19 00:37:57.227 lat (msec) : 20=0.06%, 50=95.47%, 100=4.47% 00:37:57.227 cpu : usr=97.99%, sys=1.53%, ctx=15, majf=0, minf=1637 00:37:57.227 IO depths : 1=3.2%, 2=7.6%, 4=18.5%, 8=60.3%, 16=10.5%, 32=0.0%, >=64=0.0% 00:37:57.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.227 complete : 0=0.0%, 4=92.7%, 8=2.7%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.227 issued rwts: total=3554,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.227 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.227 filename2: (groupid=0, jobs=1): err= 0: pid=838026: Fri Jul 26 16:43:15 2024 00:37:57.227 read: IOPS=355, BW=1421KiB/s (1455kB/s)(13.9MiB/10040msec) 00:37:57.227 slat (usec): min=9, max=132, avg=61.91, stdev=17.06 00:37:57.227 clat (usec): min=30477, max=57455, avg=44440.72, stdev=1734.02 00:37:57.227 lat (usec): min=30503, max=57509, avg=44502.63, stdev=1736.68 00:37:57.227 clat percentiles (usec): 00:37:57.227 | 1.00th=[35914], 5.00th=[42730], 10.00th=[43254], 20.00th=[43779], 00:37:57.227 | 30.00th=[43779], 40.00th=[44303], 50.00th=[44303], 60.00th=[44827], 00:37:57.227 | 70.00th=[44827], 80.00th=[45351], 90.00th=[45876], 95.00th=[46400], 00:37:57.227 | 99.00th=[47449], 99.50th=[47973], 99.90th=[54264], 99.95th=[57410], 00:37:57.227 | 99.99th=[57410] 00:37:57.227 bw ( KiB/s): min= 1408, max= 1536, per=4.17%, avg=1420.80, stdev=39.40, samples=20 00:37:57.227 iops : min= 352, max= 384, avg=355.20, stdev= 9.85, samples=20 00:37:57.227 lat (msec) : 50=99.78%, 100=0.22% 00:37:57.227 cpu : usr=98.15%, sys=1.31%, ctx=21, majf=0, minf=1635 00:37:57.227 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:57.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.227 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.227 issued rwts: total=3566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.227 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.227 filename2: (groupid=0, jobs=1): err= 0: pid=838027: Fri Jul 26 16:43:15 2024 00:37:57.227 read: IOPS=356, BW=1425KiB/s (1459kB/s)(13.9MiB/10015msec) 00:37:57.227 slat (usec): min=8, max=373, avg=35.69, stdev=21.45 00:37:57.227 clat (usec): min=20596, max=58827, avg=44612.65, stdev=2653.38 00:37:57.227 lat (usec): min=20605, max=58872, avg=44648.34, stdev=2649.25 00:37:57.227 clat percentiles (usec): 00:37:57.227 | 1.00th=[31851], 5.00th=[42730], 10.00th=[43254], 20.00th=[43779], 00:37:57.227 | 30.00th=[44303], 40.00th=[44827], 50.00th=[44827], 60.00th=[44827], 00:37:57.227 | 70.00th=[45351], 80.00th=[45876], 90.00th=[46400], 95.00th=[46924], 00:37:57.227 | 99.00th=[48497], 99.50th=[49021], 99.90th=[57934], 99.95th=[58983], 00:37:57.227 
| 99.99th=[58983] 00:37:57.227 bw ( KiB/s): min= 1280, max= 1536, per=4.19%, avg=1426.40, stdev=71.42, samples=20 00:37:57.227 iops : min= 320, max= 384, avg=356.60, stdev=17.85, samples=20 00:37:57.227 lat (msec) : 50=99.55%, 100=0.45% 00:37:57.227 cpu : usr=97.93%, sys=1.57%, ctx=21, majf=0, minf=1637 00:37:57.227 IO depths : 1=1.6%, 2=7.8%, 4=24.8%, 8=54.9%, 16=10.9%, 32=0.0%, >=64=0.0% 00:37:57.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.227 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.227 issued rwts: total=3568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.227 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:57.227 00:37:57.227 Run status group 0 (all jobs): 00:37:57.227 READ: bw=33.2MiB/s (34.8MB/s), 1412KiB/s-1452KiB/s (1445kB/s-1487kB/s), io=334MiB (350MB), run=10004-10042msec 00:37:57.227 ----------------------------------------------------- 00:37:57.227 Suppressions used: 00:37:57.227 count bytes template 00:37:57.227 45 402 /usr/src/fio/parse.c 00:37:57.227 1 8 libtcmalloc_minimal.so 00:37:57.227 1 904 libcrypto.so 00:37:57.227 ----------------------------------------------------- 00:37:57.227 00:37:57.227 16:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:37:57.227 16:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:57.227 16:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:57.227 16:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:57.227 16:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:57.227 16:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:57.227 16:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:57.227 16:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:57.227 16:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:57.227 16:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:57.227 16:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:57.227 16:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:57.227 16:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:57.227 16:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:57.227 16:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:57.227 16:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:57.227 16:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:57.228 16:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:57.228 16:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:57.228 16:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:57.228 16:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:57.228 16:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:57.228 16:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
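The READ summary line is consistent with the per-file results above: 24 jobs at roughly 1412-1452 KiB/s each add up to about 33-34 MiB/s in total, matching the reported 33.2 MiB/s aggregate. The destroy_subsystems pass that follows undoes the setup, deleting each NVMe-oF subsystem before its backing null bdev; issued by hand (again assuming rpc_cmd wraps scripts/rpc.py), the per-subsystem teardown is roughly:

    # Sketch of destroy_subsystem 0; the trace repeats this for cnode1/bdev_null1
    # and cnode2/bdev_null2 before the next NULL_DIF=1 pass is set up.
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py bdev_null_delete bdev_null0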
00:37:57.228 16:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:57.228 16:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:57.228 16:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:37:57.228 16:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:37:57.228 16:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:37:57.228 16:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:57.228 16:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:57.228 16:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:57.228 16:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:37:57.228 16:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:57.228 16:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:57.228 16:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:57.228 16:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:37:57.228 16:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:37:57.228 16:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:37:57.228 16:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:37:57.228 16:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:37:57.228 16:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:37:57.228 16:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:37:57.228 16:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:57.228 16:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:57.228 16:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:57.228 16:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:57.228 16:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:57.228 16:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:57.228 16:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:57.228 bdev_null0 00:37:57.228 16:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:57.228 16:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:57.228 16:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:57.228 16:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:57.487 16:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:57.487 16:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:57.487 16:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:57.487 16:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:57.487 16:43:16 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:57.487 16:43:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:57.487 16:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:57.487 16:43:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:57.487 [2024-07-26 16:43:17.003220] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:57.487 bdev_null1 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
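Note: the create_subsystem trace above reduces to four RPCs per index. A minimal standalone sketch using scripts/rpc.py (a sketch only, assuming a running nvmf_tgt with a TCP transport already created; the address, sizes and DIF settings are copied from the trace):

#!/usr/bin/env bash
# Sketch mirroring the rpc_cmd sequence traced by target/dif.sh for subsystem 0.
rpc=./scripts/rpc.py
sub=0
# 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1
$rpc bdev_null_create bdev_null$sub 64 512 --md-size 16 --dif-type 1
# NVMe-oF subsystem that any host may connect to
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$sub --serial-number 53313233-$sub --allow-any-host
# expose the null bdev as a namespace and add a TCP listener
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$sub bdev_null$sub
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$sub -t tcp -a 10.0.0.2 -s 4420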
00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:57.487 { 00:37:57.487 "params": { 00:37:57.487 "name": "Nvme$subsystem", 00:37:57.487 "trtype": "$TEST_TRANSPORT", 00:37:57.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:57.487 "adrfam": "ipv4", 00:37:57.487 "trsvcid": "$NVMF_PORT", 00:37:57.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:57.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:57.487 "hdgst": ${hdgst:-false}, 00:37:57.487 "ddgst": ${ddgst:-false} 00:37:57.487 }, 00:37:57.487 "method": "bdev_nvme_attach_controller" 00:37:57.487 } 00:37:57.487 EOF 00:37:57.487 )") 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:57.487 { 00:37:57.487 "params": { 00:37:57.487 "name": "Nvme$subsystem", 00:37:57.487 "trtype": "$TEST_TRANSPORT", 00:37:57.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:57.487 "adrfam": "ipv4", 00:37:57.487 "trsvcid": "$NVMF_PORT", 00:37:57.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:57.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:57.487 "hdgst": ${hdgst:-false}, 00:37:57.487 "ddgst": ${ddgst:-false} 00:37:57.487 }, 00:37:57.487 "method": "bdev_nvme_attach_controller" 00:37:57.487 } 00:37:57.487 EOF 00:37:57.487 )") 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file++ )) 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:37:57.487 16:43:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:57.487 "params": { 00:37:57.487 "name": "Nvme0", 00:37:57.487 "trtype": "tcp", 00:37:57.487 "traddr": "10.0.0.2", 00:37:57.487 "adrfam": "ipv4", 00:37:57.487 "trsvcid": "4420", 00:37:57.487 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:57.487 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:57.487 "hdgst": false, 00:37:57.487 "ddgst": false 00:37:57.487 }, 00:37:57.487 "method": "bdev_nvme_attach_controller" 00:37:57.487 },{ 00:37:57.487 "params": { 00:37:57.488 "name": "Nvme1", 00:37:57.488 "trtype": "tcp", 00:37:57.488 "traddr": "10.0.0.2", 00:37:57.488 "adrfam": "ipv4", 00:37:57.488 "trsvcid": "4420", 00:37:57.488 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:57.488 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:57.488 "hdgst": false, 00:37:57.488 "ddgst": false 00:37:57.488 }, 00:37:57.488 "method": "bdev_nvme_attach_controller" 00:37:57.488 }' 00:37:57.488 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:37:57.488 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:37:57.488 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:37:57.488 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:57.488 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:57.746 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:57.746 ... 00:37:57.746 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:57.746 ... 
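For reference, the harness invocation above is plain fio driven through SPDK's external bdev engine plus a JSON config describing the attached NVMe/TCP controllers. A rough standalone equivalent (a sketch; the bdev.json filename, plugin path and Nvme0n1 bdev name are illustrative, not taken from this run):

# Sketch only: run fio against SPDK bdevs outside the test harness.
# bdev.json is an SPDK JSON config holding bdev_nvme_attach_controller entries
# like the ones printed above (Nvme0/Nvme1 over TCP to 10.0.0.2:4420).
export LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json --thread=1 \
    --name=job0 --filename=Nvme0n1 --rw=randread --bs=8k --numjobs=2 --iodepth=8 --runtime=5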
00:37:57.746 fio-3.35 00:37:57.746 Starting 4 threads 00:37:57.746 EAL: No free 2048 kB hugepages reported on node 1 00:38:04.308 00:38:04.308 filename0: (groupid=0, jobs=1): err= 0: pid=839530: Fri Jul 26 16:43:23 2024 00:38:04.308 read: IOPS=1371, BW=10.7MiB/s (11.2MB/s)(53.6MiB/5004msec) 00:38:04.308 slat (nsec): min=7113, max=75201, avg=21032.26, stdev=8133.72 00:38:04.308 clat (usec): min=1138, max=10759, avg=5761.85, stdev=963.08 00:38:04.308 lat (usec): min=1162, max=10779, avg=5782.89, stdev=961.89 00:38:04.308 clat percentiles (usec): 00:38:04.308 | 1.00th=[ 3556], 5.00th=[ 4817], 10.00th=[ 5014], 20.00th=[ 5276], 00:38:04.308 | 30.00th=[ 5407], 40.00th=[ 5473], 50.00th=[ 5538], 60.00th=[ 5669], 00:38:04.308 | 70.00th=[ 5800], 80.00th=[ 5997], 90.00th=[ 6783], 95.00th=[ 7963], 00:38:04.308 | 99.00th=[ 9241], 99.50th=[ 9503], 99.90th=[10290], 99.95th=[10421], 00:38:04.308 | 99.99th=[10814] 00:38:04.308 bw ( KiB/s): min=10432, max=11616, per=24.48%, avg=10968.40, stdev=437.90, samples=10 00:38:04.308 iops : min= 1304, max= 1452, avg=1371.00, stdev=54.80, samples=10 00:38:04.308 lat (msec) : 2=0.16%, 4=1.50%, 10=98.13%, 20=0.20% 00:38:04.308 cpu : usr=93.64%, sys=5.26%, ctx=73, majf=0, minf=1636 00:38:04.308 IO depths : 1=0.1%, 2=11.2%, 4=62.0%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:04.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:04.308 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:04.308 issued rwts: total=6862,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:04.308 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:04.308 filename0: (groupid=0, jobs=1): err= 0: pid=839531: Fri Jul 26 16:43:23 2024 00:38:04.308 read: IOPS=1389, BW=10.9MiB/s (11.4MB/s)(54.3MiB/5002msec) 00:38:04.308 slat (nsec): min=7418, max=63945, avg=19760.74, stdev=8683.06 00:38:04.308 clat (usec): min=1284, max=10846, avg=5693.43, stdev=934.26 00:38:04.308 lat (usec): min=1303, max=10870, avg=5713.19, stdev=933.56 00:38:04.308 clat percentiles (usec): 00:38:04.308 | 1.00th=[ 3294], 5.00th=[ 4424], 10.00th=[ 4752], 20.00th=[ 5145], 00:38:04.308 | 30.00th=[ 5342], 40.00th=[ 5473], 50.00th=[ 5538], 60.00th=[ 5669], 00:38:04.308 | 70.00th=[ 5866], 80.00th=[ 6194], 90.00th=[ 6718], 95.00th=[ 7439], 00:38:04.308 | 99.00th=[ 8848], 99.50th=[ 9241], 99.90th=[10552], 99.95th=[10552], 00:38:04.308 | 99.99th=[10814] 00:38:04.308 bw ( KiB/s): min= 9440, max=11664, per=24.75%, avg=11086.22, stdev=726.13, samples=9 00:38:04.308 iops : min= 1180, max= 1458, avg=1385.78, stdev=90.77, samples=9 00:38:04.308 lat (msec) : 2=0.14%, 4=2.06%, 10=97.64%, 20=0.16% 00:38:04.308 cpu : usr=94.86%, sys=4.60%, ctx=8, majf=0, minf=1632 00:38:04.308 IO depths : 1=0.1%, 2=12.7%, 4=58.4%, 8=28.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:04.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:04.308 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:04.308 issued rwts: total=6948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:04.308 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:04.308 filename1: (groupid=0, jobs=1): err= 0: pid=839532: Fri Jul 26 16:43:23 2024 00:38:04.308 read: IOPS=1372, BW=10.7MiB/s (11.2MB/s)(53.6MiB/5001msec) 00:38:04.308 slat (nsec): min=6779, max=63719, avg=19474.35, stdev=8572.29 00:38:04.308 clat (usec): min=1251, max=13531, avg=5767.72, stdev=871.13 00:38:04.308 lat (usec): min=1274, max=13554, avg=5787.19, stdev=870.43 00:38:04.308 clat percentiles (usec): 00:38:04.308 | 
1.00th=[ 4015], 5.00th=[ 4817], 10.00th=[ 5080], 20.00th=[ 5342], 00:38:04.308 | 30.00th=[ 5407], 40.00th=[ 5538], 50.00th=[ 5604], 60.00th=[ 5735], 00:38:04.308 | 70.00th=[ 5866], 80.00th=[ 5997], 90.00th=[ 6783], 95.00th=[ 7635], 00:38:04.308 | 99.00th=[ 8979], 99.50th=[ 9634], 99.90th=[10421], 99.95th=[13435], 00:38:04.308 | 99.99th=[13566] 00:38:04.308 bw ( KiB/s): min= 9504, max=11824, per=24.42%, avg=10939.11, stdev=732.82, samples=9 00:38:04.308 iops : min= 1188, max= 1478, avg=1367.33, stdev=91.66, samples=9 00:38:04.308 lat (msec) : 2=0.13%, 4=0.85%, 10=98.72%, 20=0.31% 00:38:04.308 cpu : usr=94.50%, sys=4.94%, ctx=9, majf=0, minf=1637 00:38:04.308 IO depths : 1=0.1%, 2=7.4%, 4=63.2%, 8=29.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:04.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:04.308 complete : 0=0.0%, 4=93.8%, 8=6.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:04.308 issued rwts: total=6863,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:04.308 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:04.308 filename1: (groupid=0, jobs=1): err= 0: pid=839533: Fri Jul 26 16:43:23 2024 00:38:04.308 read: IOPS=1468, BW=11.5MiB/s (12.0MB/s)(57.4MiB/5005msec) 00:38:04.308 slat (usec): min=7, max=391, avg=16.37, stdev= 8.25 00:38:04.308 clat (usec): min=1308, max=9657, avg=5389.97, stdev=813.84 00:38:04.308 lat (usec): min=1327, max=9679, avg=5406.34, stdev=813.85 00:38:04.308 clat percentiles (usec): 00:38:04.308 | 1.00th=[ 3195], 5.00th=[ 4178], 10.00th=[ 4424], 20.00th=[ 4817], 00:38:04.308 | 30.00th=[ 5145], 40.00th=[ 5342], 50.00th=[ 5473], 60.00th=[ 5538], 00:38:04.308 | 70.00th=[ 5669], 80.00th=[ 5800], 90.00th=[ 5997], 95.00th=[ 6521], 00:38:04.308 | 99.00th=[ 8455], 99.50th=[ 8717], 99.90th=[ 9372], 99.95th=[ 9634], 00:38:04.308 | 99.99th=[ 9634] 00:38:04.308 bw ( KiB/s): min=10800, max=12928, per=26.23%, avg=11750.40, stdev=731.63, samples=10 00:38:04.308 iops : min= 1350, max= 1616, avg=1468.80, stdev=91.45, samples=10 00:38:04.308 lat (msec) : 2=0.04%, 4=3.22%, 10=96.74% 00:38:04.308 cpu : usr=94.30%, sys=5.14%, ctx=9, majf=0, minf=1639 00:38:04.308 IO depths : 1=0.2%, 2=13.3%, 4=59.8%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:04.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:04.308 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:04.308 issued rwts: total=7352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:04.308 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:04.308 00:38:04.308 Run status group 0 (all jobs): 00:38:04.308 READ: bw=43.7MiB/s (45.9MB/s), 10.7MiB/s-11.5MiB/s (11.2MB/s-12.0MB/s), io=219MiB (230MB), run=5001-5005msec 00:38:04.875 ----------------------------------------------------- 00:38:04.875 Suppressions used: 00:38:04.875 count bytes template 00:38:04.875 6 52 /usr/src/fio/parse.c 00:38:04.875 1 8 libtcmalloc_minimal.so 00:38:04.875 1 904 libcrypto.so 00:38:04.875 ----------------------------------------------------- 00:38:04.875 00:38:04.875 16:43:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:38:04.875 16:43:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:04.875 16:43:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:04.875 16:43:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:04.875 16:43:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:04.875 16:43:24 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:04.875 16:43:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:04.875 16:43:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:04.875 16:43:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:04.875 16:43:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:04.875 16:43:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:04.875 16:43:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:04.875 16:43:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:04.875 16:43:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:04.875 16:43:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:04.875 16:43:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:38:04.875 16:43:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:04.875 16:43:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:04.875 16:43:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:04.875 16:43:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:04.875 16:43:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:04.875 16:43:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:04.875 16:43:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:04.875 16:43:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:04.875 00:38:04.875 real 0m27.852s 00:38:04.875 user 4m33.355s 00:38:04.875 sys 0m8.879s 00:38:04.875 16:43:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:04.875 16:43:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:04.875 ************************************ 00:38:04.875 END TEST fio_dif_rand_params 00:38:04.875 ************************************ 00:38:04.875 16:43:24 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:38:04.875 16:43:24 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:04.875 16:43:24 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:04.875 16:43:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:04.875 ************************************ 00:38:04.875 START TEST fio_dif_digest 00:38:04.875 ************************************ 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # 
iodepth=3 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:04.875 bdev_null0 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:04.875 [2024-07-26 16:43:24.618753] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:04.875 { 00:38:04.875 "params": { 00:38:04.875 "name": "Nvme$subsystem", 00:38:04.875 "trtype": "$TEST_TRANSPORT", 00:38:04.875 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:38:04.875 "adrfam": "ipv4", 00:38:04.875 "trsvcid": "$NVMF_PORT", 00:38:04.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:04.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:04.875 "hdgst": ${hdgst:-false}, 00:38:04.875 "ddgst": ${ddgst:-false} 00:38:04.875 }, 00:38:04.875 "method": "bdev_nvme_attach_controller" 00:38:04.875 } 00:38:04.875 EOF 00:38:04.875 )") 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:38:04.875 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:38:04.876 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:04.876 16:43:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:38:04.876 16:43:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:38:04.876 16:43:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:04.876 "params": { 00:38:04.876 "name": "Nvme0", 00:38:04.876 "trtype": "tcp", 00:38:04.876 "traddr": "10.0.0.2", 00:38:04.876 "adrfam": "ipv4", 00:38:04.876 "trsvcid": "4420", 00:38:04.876 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:04.876 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:04.876 "hdgst": true, 00:38:04.876 "ddgst": true 00:38:04.876 }, 00:38:04.876 "method": "bdev_nvme_attach_controller" 00:38:04.876 }' 00:38:05.135 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:05.135 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:05.135 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # break 00:38:05.135 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:05.135 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:05.393 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:05.393 ... 00:38:05.393 fio-3.35 00:38:05.393 Starting 3 threads 00:38:05.393 EAL: No free 2048 kB hugepages reported on node 1 00:38:17.591 00:38:17.591 filename0: (groupid=0, jobs=1): err= 0: pid=840409: Fri Jul 26 16:43:35 2024 00:38:17.591 read: IOPS=178, BW=22.3MiB/s (23.4MB/s)(223MiB/10006msec) 00:38:17.591 slat (nsec): min=11784, max=55725, avg=21398.27, stdev=3860.63 00:38:17.591 clat (usec): min=10262, max=58600, avg=16770.28, stdev=2187.98 00:38:17.591 lat (usec): min=10282, max=58620, avg=16791.68, stdev=2187.93 00:38:17.591 clat percentiles (usec): 00:38:17.591 | 1.00th=[11863], 5.00th=[14484], 10.00th=[15270], 20.00th=[15795], 00:38:17.591 | 30.00th=[16188], 40.00th=[16450], 50.00th=[16909], 60.00th=[17171], 00:38:17.591 | 70.00th=[17433], 80.00th=[17695], 90.00th=[18220], 95.00th=[18744], 00:38:17.591 | 99.00th=[19792], 99.50th=[20317], 99.90th=[57934], 99.95th=[58459], 00:38:17.591 | 99.99th=[58459] 00:38:17.591 bw ( KiB/s): min=20736, max=23808, per=33.87%, avg=22837.89, stdev=741.86, samples=19 00:38:17.591 iops : min= 162, max= 186, avg=178.42, stdev= 5.80, samples=19 00:38:17.591 lat (msec) : 20=99.27%, 50=0.56%, 100=0.17% 00:38:17.591 cpu : usr=90.05%, sys=8.13%, ctx=256, majf=0, minf=1636 00:38:17.591 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:17.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:17.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:17.591 issued rwts: total=1787,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:17.591 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:17.591 filename0: (groupid=0, jobs=1): err= 0: pid=840410: Fri Jul 26 16:43:35 2024 00:38:17.591 read: IOPS=178, BW=22.3MiB/s (23.4MB/s)(224MiB/10047msec) 00:38:17.591 slat (nsec): min=5510, max=40316, avg=21029.73, stdev=3378.03 00:38:17.591 clat (usec): min=10120, max=60226, avg=16778.11, stdev=2615.61 00:38:17.591 lat (usec): min=10140, max=60246, avg=16799.14, stdev=2615.51 00:38:17.591 clat percentiles (usec): 00:38:17.591 | 1.00th=[11469], 5.00th=[14222], 10.00th=[15008], 
20.00th=[15664], 00:38:17.591 | 30.00th=[16057], 40.00th=[16450], 50.00th=[16712], 60.00th=[16909], 00:38:17.591 | 70.00th=[17433], 80.00th=[17957], 90.00th=[18482], 95.00th=[19006], 00:38:17.591 | 99.00th=[20055], 99.50th=[20841], 99.90th=[58983], 99.95th=[60031], 00:38:17.591 | 99.99th=[60031] 00:38:17.591 bw ( KiB/s): min=19968, max=23808, per=33.96%, avg=22901.40, stdev=901.84, samples=20 00:38:17.591 iops : min= 156, max= 186, avg=178.90, stdev= 7.06, samples=20 00:38:17.591 lat (msec) : 20=98.88%, 50=0.84%, 100=0.28% 00:38:17.591 cpu : usr=92.48%, sys=6.89%, ctx=45, majf=0, minf=1637 00:38:17.591 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:17.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:17.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:17.591 issued rwts: total=1791,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:17.591 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:17.591 filename0: (groupid=0, jobs=1): err= 0: pid=840411: Fri Jul 26 16:43:35 2024 00:38:17.591 read: IOPS=170, BW=21.3MiB/s (22.4MB/s)(214MiB/10046msec) 00:38:17.591 slat (nsec): min=5640, max=53118, avg=24485.72, stdev=5822.04 00:38:17.591 clat (usec): min=10113, max=59813, avg=17517.81, stdev=3865.81 00:38:17.591 lat (usec): min=10144, max=59844, avg=17542.30, stdev=3865.75 00:38:17.591 clat percentiles (usec): 00:38:17.591 | 1.00th=[13042], 5.00th=[15139], 10.00th=[15664], 20.00th=[16188], 00:38:17.591 | 30.00th=[16581], 40.00th=[16909], 50.00th=[17171], 60.00th=[17433], 00:38:17.591 | 70.00th=[17957], 80.00th=[18220], 90.00th=[19006], 95.00th=[19530], 00:38:17.591 | 99.00th=[21103], 99.50th=[58459], 99.90th=[58983], 99.95th=[60031], 00:38:17.591 | 99.99th=[60031] 00:38:17.591 bw ( KiB/s): min=19968, max=23040, per=32.52%, avg=21926.40, stdev=918.43, samples=20 00:38:17.591 iops : min= 156, max= 180, avg=171.30, stdev= 7.18, samples=20 00:38:17.591 lat (msec) : 20=97.61%, 50=1.57%, 100=0.82% 00:38:17.591 cpu : usr=86.51%, sys=10.38%, ctx=495, majf=0, minf=1637 00:38:17.591 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:17.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:17.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:17.591 issued rwts: total=1715,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:17.591 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:17.591 00:38:17.591 Run status group 0 (all jobs): 00:38:17.591 READ: bw=65.9MiB/s (69.1MB/s), 21.3MiB/s-22.3MiB/s (22.4MB/s-23.4MB/s), io=662MiB (694MB), run=10006-10047msec 00:38:17.591 ----------------------------------------------------- 00:38:17.591 Suppressions used: 00:38:17.591 count bytes template 00:38:17.591 5 44 /usr/src/fio/parse.c 00:38:17.591 1 8 libtcmalloc_minimal.so 00:38:17.591 1 904 libcrypto.so 00:38:17.591 ----------------------------------------------------- 00:38:17.591 00:38:17.591 16:43:36 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:38:17.591 16:43:36 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:38:17.591 16:43:36 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:38:17.591 16:43:36 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:17.591 16:43:36 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:38:17.591 16:43:36 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:38:17.591 16:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.591 16:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:17.591 16:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.591 16:43:36 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:17.591 16:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.591 16:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:17.591 16:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.591 00:38:17.591 real 0m12.228s 00:38:17.591 user 0m29.033s 00:38:17.591 sys 0m2.984s 00:38:17.591 16:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:17.591 16:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:17.591 ************************************ 00:38:17.591 END TEST fio_dif_digest 00:38:17.591 ************************************ 00:38:17.591 16:43:36 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:38:17.591 16:43:36 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:38:17.591 16:43:36 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:17.591 16:43:36 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:38:17.591 16:43:36 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:17.591 16:43:36 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:38:17.591 16:43:36 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:17.591 16:43:36 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:17.591 rmmod nvme_tcp 00:38:17.591 rmmod nvme_fabrics 00:38:17.591 rmmod nvme_keyring 00:38:17.591 16:43:36 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:17.591 16:43:36 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:38:17.591 16:43:36 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:38:17.591 16:43:36 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 833637 ']' 00:38:17.591 16:43:36 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 833637 00:38:17.591 16:43:36 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 833637 ']' 00:38:17.591 16:43:36 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 833637 00:38:17.591 16:43:36 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:38:17.591 16:43:36 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:17.591 16:43:36 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 833637 00:38:17.591 16:43:36 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:17.591 16:43:36 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:17.591 16:43:36 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 833637' 00:38:17.591 killing process with pid 833637 00:38:17.591 16:43:36 nvmf_dif -- common/autotest_common.sh@969 -- # kill 833637 00:38:17.591 16:43:36 nvmf_dif -- common/autotest_common.sh@974 -- # wait 833637 00:38:18.526 16:43:38 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:38:18.526 16:43:38 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:19.460 Waiting for block devices as requested 00:38:19.460 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:38:19.718 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:38:19.718 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 
00:38:19.718 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:38:19.977 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:38:19.977 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:38:19.977 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:38:19.977 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:38:20.235 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:38:20.235 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:38:20.235 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:38:20.235 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:38:20.493 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:38:20.493 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:38:20.493 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:38:20.493 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:38:20.493 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:38:20.752 16:43:40 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:20.752 16:43:40 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:20.752 16:43:40 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:20.752 16:43:40 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:20.752 16:43:40 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:20.752 16:43:40 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:20.752 16:43:40 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:22.701 16:43:42 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:22.701 00:38:22.701 real 1m15.092s 00:38:22.701 user 6m40.624s 00:38:22.701 sys 0m21.058s 00:38:22.701 16:43:42 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:22.701 16:43:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:22.701 ************************************ 00:38:22.701 END TEST nvmf_dif 00:38:22.701 ************************************ 00:38:22.701 16:43:42 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:22.701 16:43:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:22.701 16:43:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:22.701 16:43:42 -- common/autotest_common.sh@10 -- # set +x 00:38:22.701 ************************************ 00:38:22.701 START TEST nvmf_abort_qd_sizes 00:38:22.701 ************************************ 00:38:22.701 16:43:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:22.959 * Looking for test storage... 
00:38:22.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:22.959 16:43:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:22.959 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:38:22.959 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:38:22.960 16:43:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:24.862 16:43:44 
nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:24.862 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:24.862 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == 
unknown ]] 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:24.862 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:24.862 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:24.862 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:24.862 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:24.862 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:38:24.862 00:38:24.863 --- 10.0.0.2 ping statistics --- 00:38:24.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:24.863 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:38:24.863 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:24.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:24.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:38:24.863 00:38:24.863 --- 10.0.0.1 ping statistics --- 00:38:24.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:24.863 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:38:24.863 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:24.863 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:38:24.863 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:38:24.863 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:25.799 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:38:25.799 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:38:25.799 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:38:26.058 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:38:26.058 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:38:26.058 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:38:26.058 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:38:26.058 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:38:26.058 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:38:26.058 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:38:26.058 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:38:26.058 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:38:26.058 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:38:26.058 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:38:26.058 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:38:26.058 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:38:26.994 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:38:26.994 16:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:26.994 16:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:26.994 16:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:26.994 16:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:26.994 16:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:26.994 16:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:26.994 16:43:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:38:26.994 16:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:26.994 16:43:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:26.994 16:43:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:26.994 16:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=845446 00:38:26.994 16:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:38:26.994 16:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 845446 00:38:26.994 16:43:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 845446 ']' 00:38:26.994 16:43:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:26.994 16:43:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:26.994 16:43:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:38:26.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:26.994 16:43:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:26.994 16:43:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:27.253 [2024-07-26 16:43:46.798754] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:38:27.253 [2024-07-26 16:43:46.798894] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:27.253 EAL: No free 2048 kB hugepages reported on node 1 00:38:27.253 [2024-07-26 16:43:46.936420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:27.511 [2024-07-26 16:43:47.195412] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:27.511 [2024-07-26 16:43:47.195486] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:27.511 [2024-07-26 16:43:47.195523] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:27.511 [2024-07-26 16:43:47.195546] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:27.511 [2024-07-26 16:43:47.195568] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:27.511 [2024-07-26 16:43:47.195685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:27.511 [2024-07-26 16:43:47.195757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:38:27.511 [2024-07-26 16:43:47.195844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:27.511 [2024-07-26 16:43:47.195854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:38:28.078 16:43:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:28.078 16:43:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:38:28.078 16:43:47 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:28.078 16:43:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:28.078 16:43:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:28.078 16:43:47 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:28.078 16:43:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:38:28.078 16:43:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:38:28.078 16:43:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:38:28.078 16:43:47 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:38:28.078 16:43:47 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:38:28.078 16:43:47 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:38:28.078 16:43:47 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:38:28.078 16:43:47 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:38:28.078 16:43:47 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:38:28.078 16:43:47 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:38:28.078 16:43:47 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:38:28.078 16:43:47 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:38:28.078 16:43:47 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:38:28.078 16:43:47 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:38:28.078 16:43:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:38:28.078 16:43:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:38:28.078 16:43:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:38:28.078 16:43:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:28.078 16:43:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:28.078 16:43:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:28.078 ************************************ 00:38:28.078 START TEST spdk_target_abort 00:38:28.078 ************************************ 00:38:28.078 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:38:28.078 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:38:28.079 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:38:28.079 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:28.079 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:31.362 spdk_targetn1 00:38:31.362 16:43:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.362 16:43:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:31.362 16:43:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.362 16:43:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:31.362 [2024-07-26 16:43:50.639120] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:31.362 16:43:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.362 16:43:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:38:31.362 16:43:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.362 16:43:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:31.362 16:43:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.362 16:43:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:38:31.362 16:43:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.362 16:43:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:31.362 16:43:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.362 16:43:50 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:38:31.362 16:43:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.362 16:43:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:31.362 [2024-07-26 16:43:50.684772] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:31.362 16:43:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.362 16:43:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:38:31.362 16:43:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:31.362 16:43:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:31.362 16:43:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:38:31.362 16:43:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:31.362 16:43:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:31.362 16:43:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:31.362 16:43:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:31.362 16:43:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:31.362 16:43:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:31.362 16:43:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:31.362 16:43:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:31.362 16:43:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:31.362 16:43:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:31.362 16:43:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:38:31.362 16:43:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:31.362 16:43:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:31.362 16:43:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:31.362 16:43:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:31.362 16:43:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:31.362 16:43:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:31.362 EAL: No free 2048 kB hugepages 
reported on node 1 00:38:34.646 Initializing NVMe Controllers 00:38:34.646 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:34.646 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:34.646 Initialization complete. Launching workers. 00:38:34.646 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8254, failed: 0 00:38:34.646 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1265, failed to submit 6989 00:38:34.646 success 750, unsuccess 515, failed 0 00:38:34.646 16:43:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:34.646 16:43:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:34.646 EAL: No free 2048 kB hugepages reported on node 1 00:38:37.931 Initializing NVMe Controllers 00:38:37.931 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:37.931 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:37.931 Initialization complete. Launching workers. 00:38:37.931 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8345, failed: 0 00:38:37.931 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1264, failed to submit 7081 00:38:37.931 success 315, unsuccess 949, failed 0 00:38:37.931 16:43:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:37.931 16:43:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:37.931 EAL: No free 2048 kB hugepages reported on node 1 00:38:41.214 Initializing NVMe Controllers 00:38:41.214 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:41.214 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:41.214 Initialization complete. Launching workers. 
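For spdk_target_abort, the trace above claims the host NVMe drive at 0000:88:00.0 for SPDK, exports it over NVMe/TCP on the namespaced address, and runs the abort example at queue depths 4, 24 and 64 to exercise abort handling under a mixed read/write load. The test's rpc_cmd helper is a thin wrapper around scripts/rpc.py talking to the nvmf_tgt started earlier on /var/tmp/spdk.sock, so the same bring-up can be sketched roughly as:

scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target   # creates bdev spdk_targetn1
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
# drive the subsystem with the abort example at each queue depth; the NS/CTRLR counters
# in the report lines come from this tool
for qd in 4 24 64; do
    ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
done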
00:38:41.214 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 27217, failed: 0 00:38:41.214 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2790, failed to submit 24427 00:38:41.214 success 240, unsuccess 2550, failed 0 00:38:41.214 16:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:38:41.214 16:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.214 16:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:41.214 16:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:41.214 16:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:38:41.214 16:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.214 16:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:42.587 16:44:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:42.587 16:44:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 845446 00:38:42.587 16:44:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 845446 ']' 00:38:42.587 16:44:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 845446 00:38:42.587 16:44:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:38:42.587 16:44:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:42.587 16:44:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 845446 00:38:42.587 16:44:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:42.588 16:44:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:42.588 16:44:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 845446' 00:38:42.588 killing process with pid 845446 00:38:42.588 16:44:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 845446 00:38:42.588 16:44:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 845446 00:38:43.524 00:38:43.524 real 0m15.383s 00:38:43.524 user 0m58.981s 00:38:43.524 sys 0m2.718s 00:38:43.524 16:44:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:43.524 16:44:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:43.524 ************************************ 00:38:43.524 END TEST spdk_target_abort 00:38:43.524 ************************************ 00:38:43.524 16:44:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:38:43.524 16:44:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:43.524 16:44:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:43.524 16:44:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:43.524 ************************************ 00:38:43.524 START TEST kernel_target_abort 00:38:43.524 
************************************ 00:38:43.524 16:44:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:38:43.524 16:44:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:38:43.524 16:44:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:38:43.524 16:44:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:43.524 16:44:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:43.524 16:44:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:43.524 16:44:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:43.524 16:44:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:43.524 16:44:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:43.524 16:44:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:43.524 16:44:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:43.524 16:44:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:43.524 16:44:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:38:43.524 16:44:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:38:43.524 16:44:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:38:43.524 16:44:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:43.524 16:44:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:43.524 16:44:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:43.524 16:44:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:38:43.524 16:44:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:38:43.524 16:44:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:38:43.524 16:44:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:43.524 16:44:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:44.899 Waiting for block devices as requested 00:38:44.899 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:38:44.899 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:38:44.899 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:38:45.179 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:38:45.179 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:38:45.179 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:38:45.179 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:38:45.179 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:38:45.442 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:38:45.442 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:38:45.442 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:38:45.442 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:38:45.700 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:38:45.700 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:38:45.700 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:38:45.700 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:38:45.959 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:38:46.217 16:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:38:46.217 16:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:38:46.217 16:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:38:46.217 16:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:38:46.217 16:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:38:46.217 16:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:38:46.217 16:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:38:46.217 16:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:38:46.476 16:44:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:38:46.476 No valid GPT data, bailing 00:38:46.476 16:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:38:46.476 16:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:38:46.476 16:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:38:46.476 16:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:38:46.476 16:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:38:46.476 16:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:46.476 16:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:46.476 16:44:06 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:38:46.476 16:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:38:46.476 16:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:38:46.476 16:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:38:46.476 16:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:38:46.476 16:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:38:46.476 16:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:38:46.476 16:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:38:46.476 16:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:38:46.476 16:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:38:46.476 16:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:38:46.476 00:38:46.476 Discovery Log Number of Records 2, Generation counter 2 00:38:46.476 =====Discovery Log Entry 0====== 00:38:46.476 trtype: tcp 00:38:46.476 adrfam: ipv4 00:38:46.476 subtype: current discovery subsystem 00:38:46.476 treq: not specified, sq flow control disable supported 00:38:46.476 portid: 1 00:38:46.476 trsvcid: 4420 00:38:46.476 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:38:46.476 traddr: 10.0.0.1 00:38:46.476 eflags: none 00:38:46.476 sectype: none 00:38:46.476 =====Discovery Log Entry 1====== 00:38:46.476 trtype: tcp 00:38:46.476 adrfam: ipv4 00:38:46.476 subtype: nvme subsystem 00:38:46.476 treq: not specified, sq flow control disable supported 00:38:46.476 portid: 1 00:38:46.476 trsvcid: 4420 00:38:46.476 subnqn: nqn.2016-06.io.spdk:testnqn 00:38:46.476 traddr: 10.0.0.1 00:38:46.476 eflags: none 00:38:46.476 sectype: none 00:38:46.476 16:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:38:46.476 16:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:46.476 16:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:46.476 16:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:38:46.476 16:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:46.476 16:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:46.476 16:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:46.476 16:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:46.476 16:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:46.476 16:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:46.476 16:44:06 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:46.476 16:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:46.476 16:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:46.476 16:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:46.476 16:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:38:46.476 16:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:46.476 16:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:38:46.476 16:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:46.476 16:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:46.476 16:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:46.476 16:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:46.734 EAL: No free 2048 kB hugepages reported on node 1 00:38:50.014 Initializing NVMe Controllers 00:38:50.014 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:50.014 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:50.014 Initialization complete. Launching workers. 00:38:50.014 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 26923, failed: 0 00:38:50.014 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26923, failed to submit 0 00:38:50.014 success 0, unsuccess 26923, failed 0 00:38:50.014 16:44:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:50.014 16:44:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:50.014 EAL: No free 2048 kB hugepages reported on node 1 00:38:53.293 Initializing NVMe Controllers 00:38:53.293 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:53.293 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:53.293 Initialization complete. Launching workers. 
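kernel_target_abort is the mirror image of the previous case: setup.sh reset hands the NVMe drive back to the kernel driver, and configure_kernel_target builds an in-kernel nvmet target over configfs that exports /dev/nvme0n1 on 10.0.0.1:4420, confirmed with nvme discover before the same three abort runs are repeated against it. The xtrace does not show shell redirections, so the attribute paths below are the standard nvmet configfs names rather than lines copied from the log; a rough sketch of what that setup amounts to:

modprobe nvmet                     # the cleanup later removes nvmet_tcp as well, so the TCP transport module also ends up loaded
cfg=/sys/kernel/config/nvmet
subsys=$cfg/subsystems/nqn.2016-06.io.spdk:testnqn
mkdir $subsys
mkdir $subsys/namespaces/1
mkdir $cfg/ports/1
echo 1            > $subsys/attr_allow_any_host
echo /dev/nvme0n1 > $subsys/namespaces/1/device_path
echo 1            > $subsys/namespaces/1/enable
echo 10.0.0.1     > $cfg/ports/1/addr_traddr
echo tcp          > $cfg/ports/1/addr_trtype
echo 4420         > $cfg/ports/1/addr_trsvcid
echo ipv4         > $cfg/ports/1/addr_adrfam
ln -s $subsys $cfg/ports/1/subsystems/
nvme discover -t tcp -a 10.0.0.1 -s 4420    # should list the discovery subsystem and testnqn, as in the log above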
00:38:53.293 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 53009, failed: 0 00:38:53.293 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 13342, failed to submit 39667 00:38:53.293 success 0, unsuccess 13342, failed 0 00:38:53.293 16:44:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:53.293 16:44:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:53.293 EAL: No free 2048 kB hugepages reported on node 1 00:38:56.569 Initializing NVMe Controllers 00:38:56.569 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:56.569 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:56.569 Initialization complete. Launching workers. 00:38:56.569 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 51878, failed: 0 00:38:56.569 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 12950, failed to submit 38928 00:38:56.569 success 0, unsuccess 12950, failed 0 00:38:56.569 16:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:38:56.569 16:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:38:56.569 16:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:38:56.569 16:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:56.569 16:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:56.569 16:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:56.569 16:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:56.569 16:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:38:56.569 16:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:38:56.569 16:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:57.135 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:38:57.135 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:38:57.135 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:38:57.394 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:38:57.394 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:38:57.394 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:38:57.394 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:38:57.394 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:38:57.394 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:38:57.394 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:38:57.394 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:38:57.394 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:38:57.394 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:38:57.394 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:38:57.394 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:38:57.394 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:38:58.327 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:38:58.327 00:38:58.327 real 0m14.847s 00:38:58.327 user 0m5.756s 00:38:58.327 sys 0m3.661s 00:38:58.327 16:44:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:58.327 16:44:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:58.327 ************************************ 00:38:58.327 END TEST kernel_target_abort 00:38:58.327 ************************************ 00:38:58.327 16:44:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:38:58.327 16:44:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:38:58.327 16:44:18 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:58.328 16:44:18 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:38:58.328 16:44:18 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:58.328 16:44:18 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:38:58.328 16:44:18 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:58.328 16:44:18 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:58.328 rmmod nvme_tcp 00:38:58.328 rmmod nvme_fabrics 00:38:58.586 rmmod nvme_keyring 00:38:58.586 16:44:18 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:58.586 16:44:18 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:38:58.586 16:44:18 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:38:58.586 16:44:18 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 845446 ']' 00:38:58.586 16:44:18 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 845446 00:38:58.586 16:44:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 845446 ']' 00:38:58.586 16:44:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 845446 00:38:58.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (845446) - No such process 00:38:58.586 16:44:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 845446 is not found' 00:38:58.586 Process with pid 845446 is not found 00:38:58.586 16:44:18 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:38:58.586 16:44:18 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:59.521 Waiting for block devices as requested 00:38:59.521 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:38:59.779 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:38:59.779 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:38:59.779 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:38:59.779 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:39:00.037 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:39:00.037 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:39:00.037 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:39:00.037 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:39:00.295 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:39:00.295 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:39:00.295 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:39:00.295 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:39:00.553 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:39:00.554 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:39:00.554 0000:80:04.1 (8086 
0e21): vfio-pci -> ioatdma 00:39:00.554 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:39:00.813 16:44:20 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:00.813 16:44:20 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:00.813 16:44:20 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:00.813 16:44:20 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:00.813 16:44:20 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:00.813 16:44:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:00.813 16:44:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:02.715 16:44:22 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:02.715 00:39:02.715 real 0m39.939s 00:39:02.715 user 1m6.940s 00:39:02.715 sys 0m9.592s 00:39:02.715 16:44:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:02.715 16:44:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:02.715 ************************************ 00:39:02.715 END TEST nvmf_abort_qd_sizes 00:39:02.715 ************************************ 00:39:02.715 16:44:22 -- spdk/autotest.sh@299 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:02.715 16:44:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:39:02.715 16:44:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:02.715 16:44:22 -- common/autotest_common.sh@10 -- # set +x 00:39:02.715 ************************************ 00:39:02.715 START TEST keyring_file 00:39:02.715 ************************************ 00:39:02.715 16:44:22 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:02.972 * Looking for test storage... 
00:39:02.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:02.972 16:44:22 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:02.972 16:44:22 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:02.972 16:44:22 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:39:02.972 16:44:22 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:02.972 16:44:22 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:02.972 16:44:22 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:02.972 16:44:22 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:02.972 16:44:22 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:02.972 16:44:22 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:02.972 16:44:22 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:02.972 16:44:22 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:02.972 16:44:22 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:02.972 16:44:22 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:02.972 16:44:22 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:02.972 16:44:22 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:02.972 16:44:22 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:02.972 16:44:22 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:02.972 16:44:22 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:02.972 16:44:22 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:02.972 16:44:22 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:02.972 16:44:22 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:02.972 16:44:22 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:02.972 16:44:22 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:02.972 16:44:22 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:02.972 16:44:22 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:02.972 16:44:22 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:02.972 16:44:22 keyring_file -- paths/export.sh@5 -- # export PATH 00:39:02.972 16:44:22 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:02.972 16:44:22 keyring_file -- nvmf/common.sh@47 -- # : 0 00:39:02.972 16:44:22 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:02.972 16:44:22 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:02.972 16:44:22 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:02.972 16:44:22 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:02.972 16:44:22 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:02.972 16:44:22 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:02.972 16:44:22 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:02.972 16:44:22 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:02.972 16:44:22 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:02.972 16:44:22 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:02.972 16:44:22 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:02.972 16:44:22 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:39:02.972 16:44:22 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:39:02.972 16:44:22 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:39:02.972 16:44:22 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:02.972 16:44:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:02.972 16:44:22 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:02.972 16:44:22 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:02.972 16:44:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:02.972 16:44:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:02.972 16:44:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.7Ze9KTniGV 00:39:02.972 16:44:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:02.972 16:44:22 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:02.972 16:44:22 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:39:02.972 16:44:22 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:39:02.972 16:44:22 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:39:02.972 16:44:22 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:39:02.972 16:44:22 keyring_file -- nvmf/common.sh@705 -- # python - 00:39:02.972 16:44:22 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.7Ze9KTniGV 00:39:02.972 16:44:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.7Ze9KTniGV 00:39:02.972 16:44:22 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.7Ze9KTniGV 00:39:02.972 16:44:22 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:39:02.972 16:44:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:02.972 16:44:22 keyring_file -- keyring/common.sh@17 -- # name=key1 00:39:02.972 16:44:22 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:02.972 16:44:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:02.972 16:44:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:02.972 16:44:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.k2TN2aMugx 00:39:02.972 16:44:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:02.972 16:44:22 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:02.972 16:44:22 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:39:02.972 16:44:22 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:39:02.972 16:44:22 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:39:02.972 16:44:22 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:39:02.972 16:44:22 keyring_file -- nvmf/common.sh@705 -- # python - 00:39:02.972 16:44:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.k2TN2aMugx 00:39:02.972 16:44:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.k2TN2aMugx 00:39:02.972 16:44:22 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.k2TN2aMugx 00:39:02.972 16:44:22 keyring_file -- keyring/file.sh@30 -- # tgtpid=851661 00:39:02.972 16:44:22 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:02.972 16:44:22 keyring_file -- keyring/file.sh@32 -- # waitforlisten 851661 00:39:02.972 16:44:22 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 851661 ']' 00:39:02.972 16:44:22 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:02.972 16:44:22 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:02.973 16:44:22 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:02.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:02.973 16:44:22 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:02.973 16:44:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:02.973 [2024-07-26 16:44:22.684518] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:39:02.973 [2024-07-26 16:44:22.684669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid851661 ] 00:39:03.230 EAL: No free 2048 kB hugepages reported on node 1 00:39:03.230 [2024-07-26 16:44:22.808159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:03.489 [2024-07-26 16:44:23.035114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:04.423 16:44:23 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:04.423 16:44:23 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:39:04.423 16:44:23 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:39:04.423 16:44:23 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:04.423 16:44:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:04.423 [2024-07-26 16:44:23.914933] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:04.423 null0 00:39:04.423 [2024-07-26 16:44:23.946982] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:04.423 [2024-07-26 16:44:23.947617] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:04.423 [2024-07-26 16:44:23.954996] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:39:04.423 16:44:23 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:04.423 16:44:23 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:04.423 16:44:23 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:39:04.423 16:44:23 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:04.423 16:44:23 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:39:04.423 16:44:23 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:04.423 16:44:23 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:39:04.423 16:44:23 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:04.423 16:44:23 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:04.423 16:44:23 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:04.423 16:44:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:04.423 [2024-07-26 16:44:23.962996] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:39:04.423 request: 00:39:04.423 { 00:39:04.423 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:39:04.423 "secure_channel": false, 00:39:04.423 "listen_address": { 00:39:04.423 "trtype": "tcp", 00:39:04.423 "traddr": "127.0.0.1", 00:39:04.423 "trsvcid": "4420" 00:39:04.423 }, 00:39:04.423 "method": "nvmf_subsystem_add_listener", 00:39:04.423 "req_id": 1 00:39:04.423 } 00:39:04.423 Got JSON-RPC error response 00:39:04.423 response: 00:39:04.423 { 00:39:04.423 "code": -32602, 00:39:04.423 "message": "Invalid parameters" 00:39:04.423 } 00:39:04.423 16:44:23 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:39:04.423 16:44:23 keyring_file -- common/autotest_common.sh@653 -- # es=1 
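From here keyring_file moves to the initiator side. The two /tmp/tmp.* files prepared above hold the 00112233... and 112233... hex keys wrapped into the NVMe TLS PSK interchange form (the NVMeTLSkey-1 prefix produced by format_interchange_psk) and restricted to mode 0600; the target started just above listens on 127.0.0.1:4420 with TLS for nqn.2016-06.io.spdk:cnode0, and the 'Listener already exists' / 'Invalid parameters' error is a deliberate negative check run under the NOT wrapper, not a failure. The next stretch of the trace registers those key files with the bdevperf app over its private RPC socket and attaches a PSK-protected controller; condensed, that flow is:

# bdevperf was started with "-r /var/tmp/bperf.sock -z", so every RPC below targets that socket
scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.7Ze9KTniGV
scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.k2TN2aMugx
scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys | jq '.[] | select(.name == "key0")'
# attach a TLS NVMe/TCP controller that authenticates with the registered key0
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
    -q nqn.2016-06.io.spdk:host0 --psk key0
# kick off the bdevperf job against the new nvme0n1 bdev, then detach
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0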
00:39:04.423 16:44:23 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:04.423 16:44:23 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:04.423 16:44:23 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:04.423 16:44:23 keyring_file -- keyring/file.sh@46 -- # bperfpid=851803 00:39:04.423 16:44:23 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:39:04.423 16:44:23 keyring_file -- keyring/file.sh@48 -- # waitforlisten 851803 /var/tmp/bperf.sock 00:39:04.423 16:44:23 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 851803 ']' 00:39:04.423 16:44:23 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:04.423 16:44:23 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:04.423 16:44:23 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:04.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:04.423 16:44:23 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:04.423 16:44:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:04.423 [2024-07-26 16:44:24.046678] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:39:04.424 [2024-07-26 16:44:24.046839] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid851803 ] 00:39:04.424 EAL: No free 2048 kB hugepages reported on node 1 00:39:04.424 [2024-07-26 16:44:24.178237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:04.682 [2024-07-26 16:44:24.427253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:05.248 16:44:24 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:05.248 16:44:24 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:39:05.248 16:44:24 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.7Ze9KTniGV 00:39:05.248 16:44:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.7Ze9KTniGV 00:39:05.506 16:44:25 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.k2TN2aMugx 00:39:05.506 16:44:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.k2TN2aMugx 00:39:05.790 16:44:25 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:39:05.790 16:44:25 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:39:05.790 16:44:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:05.790 16:44:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:05.790 16:44:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:06.047 16:44:25 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.7Ze9KTniGV == \/\t\m\p\/\t\m\p\.\7\Z\e\9\K\T\n\i\G\V ]] 00:39:06.047 16:44:25 keyring_file -- keyring/file.sh@52 
-- # get_key key1 00:39:06.047 16:44:25 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:39:06.047 16:44:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:06.047 16:44:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:06.047 16:44:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:06.305 16:44:25 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.k2TN2aMugx == \/\t\m\p\/\t\m\p\.\k\2\T\N\2\a\M\u\g\x ]] 00:39:06.305 16:44:25 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:39:06.305 16:44:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:06.305 16:44:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:06.305 16:44:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:06.305 16:44:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:06.305 16:44:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:06.563 16:44:26 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:39:06.563 16:44:26 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:39:06.563 16:44:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:06.563 16:44:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:06.563 16:44:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:06.563 16:44:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:06.563 16:44:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:06.820 16:44:26 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:39:06.820 16:44:26 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:06.820 16:44:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:07.077 [2024-07-26 16:44:26.678338] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:07.077 nvme0n1 00:39:07.078 16:44:26 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:39:07.078 16:44:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:07.078 16:44:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:07.078 16:44:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:07.078 16:44:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:07.078 16:44:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:07.335 16:44:27 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:39:07.335 16:44:27 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:39:07.335 16:44:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:07.335 16:44:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:07.335 16:44:27 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:39:07.335 16:44:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:07.335 16:44:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:07.593 16:44:27 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:39:07.593 16:44:27 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:07.850 Running I/O for 1 seconds... 00:39:08.781 00:39:08.781 Latency(us) 00:39:08.781 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:08.781 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:39:08.781 nvme0n1 : 1.03 3434.03 13.41 0.00 0.00 36683.67 5971.06 39807.05 00:39:08.781 =================================================================================================================== 00:39:08.781 Total : 3434.03 13.41 0.00 0.00 36683.67 5971.06 39807.05 00:39:08.781 0 00:39:08.781 16:44:28 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:08.781 16:44:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:09.039 16:44:28 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:39:09.039 16:44:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:09.039 16:44:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:09.039 16:44:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:09.039 16:44:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:09.039 16:44:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:09.296 16:44:28 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:39:09.296 16:44:28 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:39:09.296 16:44:28 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:09.296 16:44:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:09.296 16:44:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:09.296 16:44:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:09.296 16:44:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:09.553 16:44:29 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:39:09.553 16:44:29 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:09.553 16:44:29 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:39:09.554 16:44:29 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:09.554 16:44:29 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:39:09.554 16:44:29 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:09.554 16:44:29 keyring_file -- common/autotest_common.sh@642 -- # type -t 
bperf_cmd 00:39:09.554 16:44:29 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:09.554 16:44:29 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:09.554 16:44:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:09.811 [2024-07-26 16:44:29.451782] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:09.811 [2024-07-26 16:44:29.452488] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (107): Transport endpoint is not connected 00:39:09.811 [2024-07-26 16:44:29.453460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:39:09.811 [2024-07-26 16:44:29.454457] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:39:09.811 [2024-07-26 16:44:29.454492] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:09.811 [2024-07-26 16:44:29.454525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:39:09.811 request: 00:39:09.811 { 00:39:09.811 "name": "nvme0", 00:39:09.811 "trtype": "tcp", 00:39:09.811 "traddr": "127.0.0.1", 00:39:09.811 "adrfam": "ipv4", 00:39:09.811 "trsvcid": "4420", 00:39:09.811 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:09.811 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:09.811 "prchk_reftag": false, 00:39:09.811 "prchk_guard": false, 00:39:09.811 "hdgst": false, 00:39:09.811 "ddgst": false, 00:39:09.811 "psk": "key1", 00:39:09.811 "method": "bdev_nvme_attach_controller", 00:39:09.811 "req_id": 1 00:39:09.811 } 00:39:09.811 Got JSON-RPC error response 00:39:09.811 response: 00:39:09.811 { 00:39:09.811 "code": -5, 00:39:09.811 "message": "Input/output error" 00:39:09.811 } 00:39:09.811 16:44:29 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:39:09.811 16:44:29 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:09.811 16:44:29 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:09.811 16:44:29 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:09.811 16:44:29 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:39:09.811 16:44:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:09.811 16:44:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:09.811 16:44:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:09.811 16:44:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:09.811 16:44:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:10.068 16:44:29 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:39:10.068 16:44:29 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:39:10.068 16:44:29 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:10.068 16:44:29 keyring_file -- keyring/common.sh@12 -- 
# jq -r .refcnt 00:39:10.068 16:44:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:10.068 16:44:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:10.068 16:44:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:10.326 16:44:29 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:39:10.326 16:44:29 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:39:10.326 16:44:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:10.584 16:44:30 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:39:10.584 16:44:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:39:10.842 16:44:30 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:39:10.842 16:44:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:10.842 16:44:30 keyring_file -- keyring/file.sh@77 -- # jq length 00:39:11.100 16:44:30 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:39:11.100 16:44:30 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.7Ze9KTniGV 00:39:11.100 16:44:30 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.7Ze9KTniGV 00:39:11.100 16:44:30 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:39:11.100 16:44:30 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.7Ze9KTniGV 00:39:11.100 16:44:30 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:39:11.100 16:44:30 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:11.100 16:44:30 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:39:11.100 16:44:30 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:11.100 16:44:30 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.7Ze9KTniGV 00:39:11.100 16:44:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.7Ze9KTniGV 00:39:11.358 [2024-07-26 16:44:30.963304] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.7Ze9KTniGV': 0100660 00:39:11.358 [2024-07-26 16:44:30.963368] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:39:11.358 request: 00:39:11.358 { 00:39:11.358 "name": "key0", 00:39:11.358 "path": "/tmp/tmp.7Ze9KTniGV", 00:39:11.358 "method": "keyring_file_add_key", 00:39:11.358 "req_id": 1 00:39:11.358 } 00:39:11.358 Got JSON-RPC error response 00:39:11.358 response: 00:39:11.358 { 00:39:11.358 "code": -1, 00:39:11.358 "message": "Operation not permitted" 00:39:11.358 } 00:39:11.358 16:44:30 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:39:11.358 16:44:30 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:11.358 16:44:30 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:11.358 16:44:30 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 
)) 00:39:11.358 16:44:30 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.7Ze9KTniGV 00:39:11.358 16:44:30 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.7Ze9KTniGV 00:39:11.358 16:44:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.7Ze9KTniGV 00:39:11.616 16:44:31 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.7Ze9KTniGV 00:39:11.616 16:44:31 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:39:11.616 16:44:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:11.616 16:44:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:11.616 16:44:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:11.616 16:44:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:11.616 16:44:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:11.874 16:44:31 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:39:11.874 16:44:31 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:11.874 16:44:31 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:39:11.874 16:44:31 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:11.874 16:44:31 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:39:11.874 16:44:31 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:11.874 16:44:31 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:39:11.874 16:44:31 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:11.874 16:44:31 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:11.874 16:44:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:12.132 [2024-07-26 16:44:31.725574] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.7Ze9KTniGV': No such file or directory 00:39:12.132 [2024-07-26 16:44:31.725635] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:39:12.132 [2024-07-26 16:44:31.725690] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:39:12.132 [2024-07-26 16:44:31.725723] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:39:12.132 [2024-07-26 16:44:31.725744] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:39:12.132 request: 00:39:12.132 { 00:39:12.132 "name": "nvme0", 00:39:12.132 "trtype": "tcp", 00:39:12.132 "traddr": "127.0.0.1", 00:39:12.132 "adrfam": "ipv4", 00:39:12.132 "trsvcid": "4420", 00:39:12.132 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:39:12.132 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:12.132 "prchk_reftag": false, 00:39:12.132 "prchk_guard": false, 00:39:12.132 "hdgst": false, 00:39:12.132 "ddgst": false, 00:39:12.132 "psk": "key0", 00:39:12.132 "method": "bdev_nvme_attach_controller", 00:39:12.132 "req_id": 1 00:39:12.132 } 00:39:12.132 Got JSON-RPC error response 00:39:12.132 response: 00:39:12.132 { 00:39:12.132 "code": -19, 00:39:12.132 "message": "No such device" 00:39:12.132 } 00:39:12.132 16:44:31 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:39:12.132 16:44:31 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:12.133 16:44:31 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:12.133 16:44:31 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:12.133 16:44:31 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:39:12.133 16:44:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:12.391 16:44:31 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:12.391 16:44:31 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:12.391 16:44:31 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:12.391 16:44:31 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:12.391 16:44:31 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:12.391 16:44:31 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:12.391 16:44:31 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.1PHaKSw9Bn 00:39:12.391 16:44:31 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:12.391 16:44:31 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:12.391 16:44:31 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:39:12.391 16:44:31 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:39:12.391 16:44:31 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:39:12.391 16:44:31 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:39:12.391 16:44:31 keyring_file -- nvmf/common.sh@705 -- # python - 00:39:12.391 16:44:32 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.1PHaKSw9Bn 00:39:12.391 16:44:32 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.1PHaKSw9Bn 00:39:12.391 16:44:32 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.1PHaKSw9Bn 00:39:12.391 16:44:32 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1PHaKSw9Bn 00:39:12.391 16:44:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1PHaKSw9Bn 00:39:12.649 16:44:32 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:12.649 16:44:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:12.907 nvme0n1 00:39:12.907 16:44:32 keyring_file -- keyring/file.sh@99 
-- # get_refcnt key0 00:39:12.907 16:44:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:12.907 16:44:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:12.907 16:44:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:12.907 16:44:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:12.907 16:44:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:13.165 16:44:32 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:39:13.166 16:44:32 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:39:13.166 16:44:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:13.424 16:44:33 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:39:13.424 16:44:33 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:39:13.424 16:44:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:13.424 16:44:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:13.424 16:44:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:13.692 16:44:33 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:39:13.692 16:44:33 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:39:13.692 16:44:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:13.692 16:44:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:13.692 16:44:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:13.692 16:44:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:13.692 16:44:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:13.957 16:44:33 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:39:13.957 16:44:33 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:13.957 16:44:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:14.215 16:44:33 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:39:14.215 16:44:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:14.215 16:44:33 keyring_file -- keyring/file.sh@104 -- # jq length 00:39:14.472 16:44:34 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:39:14.472 16:44:34 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1PHaKSw9Bn 00:39:14.472 16:44:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1PHaKSw9Bn 00:39:14.729 16:44:34 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.k2TN2aMugx 00:39:14.729 16:44:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.k2TN2aMugx 00:39:14.986 16:44:34 keyring_file -- keyring/file.sh@109 -- # 
bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:14.986 16:44:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:15.244 nvme0n1 00:39:15.244 16:44:34 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:39:15.244 16:44:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:39:15.502 16:44:35 keyring_file -- keyring/file.sh@112 -- # config='{ 00:39:15.502 "subsystems": [ 00:39:15.502 { 00:39:15.502 "subsystem": "keyring", 00:39:15.502 "config": [ 00:39:15.502 { 00:39:15.502 "method": "keyring_file_add_key", 00:39:15.502 "params": { 00:39:15.502 "name": "key0", 00:39:15.502 "path": "/tmp/tmp.1PHaKSw9Bn" 00:39:15.502 } 00:39:15.502 }, 00:39:15.502 { 00:39:15.502 "method": "keyring_file_add_key", 00:39:15.502 "params": { 00:39:15.502 "name": "key1", 00:39:15.502 "path": "/tmp/tmp.k2TN2aMugx" 00:39:15.502 } 00:39:15.502 } 00:39:15.502 ] 00:39:15.502 }, 00:39:15.502 { 00:39:15.502 "subsystem": "iobuf", 00:39:15.502 "config": [ 00:39:15.502 { 00:39:15.502 "method": "iobuf_set_options", 00:39:15.502 "params": { 00:39:15.502 "small_pool_count": 8192, 00:39:15.502 "large_pool_count": 1024, 00:39:15.502 "small_bufsize": 8192, 00:39:15.502 "large_bufsize": 135168 00:39:15.502 } 00:39:15.502 } 00:39:15.502 ] 00:39:15.502 }, 00:39:15.502 { 00:39:15.502 "subsystem": "sock", 00:39:15.502 "config": [ 00:39:15.502 { 00:39:15.502 "method": "sock_set_default_impl", 00:39:15.502 "params": { 00:39:15.502 "impl_name": "posix" 00:39:15.502 } 00:39:15.502 }, 00:39:15.502 { 00:39:15.502 "method": "sock_impl_set_options", 00:39:15.502 "params": { 00:39:15.502 "impl_name": "ssl", 00:39:15.502 "recv_buf_size": 4096, 00:39:15.502 "send_buf_size": 4096, 00:39:15.502 "enable_recv_pipe": true, 00:39:15.502 "enable_quickack": false, 00:39:15.502 "enable_placement_id": 0, 00:39:15.502 "enable_zerocopy_send_server": true, 00:39:15.502 "enable_zerocopy_send_client": false, 00:39:15.502 "zerocopy_threshold": 0, 00:39:15.502 "tls_version": 0, 00:39:15.502 "enable_ktls": false 00:39:15.502 } 00:39:15.502 }, 00:39:15.502 { 00:39:15.502 "method": "sock_impl_set_options", 00:39:15.502 "params": { 00:39:15.502 "impl_name": "posix", 00:39:15.502 "recv_buf_size": 2097152, 00:39:15.502 "send_buf_size": 2097152, 00:39:15.502 "enable_recv_pipe": true, 00:39:15.502 "enable_quickack": false, 00:39:15.502 "enable_placement_id": 0, 00:39:15.502 "enable_zerocopy_send_server": true, 00:39:15.502 "enable_zerocopy_send_client": false, 00:39:15.502 "zerocopy_threshold": 0, 00:39:15.502 "tls_version": 0, 00:39:15.502 "enable_ktls": false 00:39:15.503 } 00:39:15.503 } 00:39:15.503 ] 00:39:15.503 }, 00:39:15.503 { 00:39:15.503 "subsystem": "vmd", 00:39:15.503 "config": [] 00:39:15.503 }, 00:39:15.503 { 00:39:15.503 "subsystem": "accel", 00:39:15.503 "config": [ 00:39:15.503 { 00:39:15.503 "method": "accel_set_options", 00:39:15.503 "params": { 00:39:15.503 "small_cache_size": 128, 00:39:15.503 "large_cache_size": 16, 00:39:15.503 "task_count": 2048, 00:39:15.503 "sequence_count": 2048, 00:39:15.503 "buf_count": 2048 00:39:15.503 } 00:39:15.503 } 00:39:15.503 ] 00:39:15.503 }, 00:39:15.503 { 00:39:15.503 
"subsystem": "bdev", 00:39:15.503 "config": [ 00:39:15.503 { 00:39:15.503 "method": "bdev_set_options", 00:39:15.503 "params": { 00:39:15.503 "bdev_io_pool_size": 65535, 00:39:15.503 "bdev_io_cache_size": 256, 00:39:15.503 "bdev_auto_examine": true, 00:39:15.503 "iobuf_small_cache_size": 128, 00:39:15.503 "iobuf_large_cache_size": 16 00:39:15.503 } 00:39:15.503 }, 00:39:15.503 { 00:39:15.503 "method": "bdev_raid_set_options", 00:39:15.503 "params": { 00:39:15.503 "process_window_size_kb": 1024, 00:39:15.503 "process_max_bandwidth_mb_sec": 0 00:39:15.503 } 00:39:15.503 }, 00:39:15.503 { 00:39:15.503 "method": "bdev_iscsi_set_options", 00:39:15.503 "params": { 00:39:15.503 "timeout_sec": 30 00:39:15.503 } 00:39:15.503 }, 00:39:15.503 { 00:39:15.503 "method": "bdev_nvme_set_options", 00:39:15.503 "params": { 00:39:15.503 "action_on_timeout": "none", 00:39:15.503 "timeout_us": 0, 00:39:15.503 "timeout_admin_us": 0, 00:39:15.503 "keep_alive_timeout_ms": 10000, 00:39:15.503 "arbitration_burst": 0, 00:39:15.503 "low_priority_weight": 0, 00:39:15.503 "medium_priority_weight": 0, 00:39:15.503 "high_priority_weight": 0, 00:39:15.503 "nvme_adminq_poll_period_us": 10000, 00:39:15.503 "nvme_ioq_poll_period_us": 0, 00:39:15.503 "io_queue_requests": 512, 00:39:15.503 "delay_cmd_submit": true, 00:39:15.503 "transport_retry_count": 4, 00:39:15.503 "bdev_retry_count": 3, 00:39:15.503 "transport_ack_timeout": 0, 00:39:15.503 "ctrlr_loss_timeout_sec": 0, 00:39:15.503 "reconnect_delay_sec": 0, 00:39:15.503 "fast_io_fail_timeout_sec": 0, 00:39:15.503 "disable_auto_failback": false, 00:39:15.503 "generate_uuids": false, 00:39:15.503 "transport_tos": 0, 00:39:15.503 "nvme_error_stat": false, 00:39:15.503 "rdma_srq_size": 0, 00:39:15.503 "io_path_stat": false, 00:39:15.503 "allow_accel_sequence": false, 00:39:15.503 "rdma_max_cq_size": 0, 00:39:15.503 "rdma_cm_event_timeout_ms": 0, 00:39:15.503 "dhchap_digests": [ 00:39:15.503 "sha256", 00:39:15.503 "sha384", 00:39:15.503 "sha512" 00:39:15.503 ], 00:39:15.503 "dhchap_dhgroups": [ 00:39:15.503 "null", 00:39:15.503 "ffdhe2048", 00:39:15.503 "ffdhe3072", 00:39:15.503 "ffdhe4096", 00:39:15.503 "ffdhe6144", 00:39:15.503 "ffdhe8192" 00:39:15.503 ] 00:39:15.503 } 00:39:15.503 }, 00:39:15.503 { 00:39:15.503 "method": "bdev_nvme_attach_controller", 00:39:15.503 "params": { 00:39:15.503 "name": "nvme0", 00:39:15.503 "trtype": "TCP", 00:39:15.503 "adrfam": "IPv4", 00:39:15.503 "traddr": "127.0.0.1", 00:39:15.503 "trsvcid": "4420", 00:39:15.503 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:15.503 "prchk_reftag": false, 00:39:15.503 "prchk_guard": false, 00:39:15.503 "ctrlr_loss_timeout_sec": 0, 00:39:15.503 "reconnect_delay_sec": 0, 00:39:15.503 "fast_io_fail_timeout_sec": 0, 00:39:15.503 "psk": "key0", 00:39:15.503 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:15.503 "hdgst": false, 00:39:15.503 "ddgst": false 00:39:15.503 } 00:39:15.503 }, 00:39:15.503 { 00:39:15.503 "method": "bdev_nvme_set_hotplug", 00:39:15.503 "params": { 00:39:15.503 "period_us": 100000, 00:39:15.503 "enable": false 00:39:15.503 } 00:39:15.503 }, 00:39:15.503 { 00:39:15.503 "method": "bdev_wait_for_examine" 00:39:15.503 } 00:39:15.503 ] 00:39:15.503 }, 00:39:15.503 { 00:39:15.503 "subsystem": "nbd", 00:39:15.503 "config": [] 00:39:15.503 } 00:39:15.503 ] 00:39:15.503 }' 00:39:15.503 16:44:35 keyring_file -- keyring/file.sh@114 -- # killprocess 851803 00:39:15.503 16:44:35 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 851803 ']' 00:39:15.503 16:44:35 keyring_file -- 
common/autotest_common.sh@954 -- # kill -0 851803 00:39:15.503 16:44:35 keyring_file -- common/autotest_common.sh@955 -- # uname 00:39:15.503 16:44:35 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:15.503 16:44:35 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 851803 00:39:15.761 16:44:35 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:39:15.761 16:44:35 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:39:15.761 16:44:35 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 851803' 00:39:15.761 killing process with pid 851803 00:39:15.761 16:44:35 keyring_file -- common/autotest_common.sh@969 -- # kill 851803 00:39:15.761 Received shutdown signal, test time was about 1.000000 seconds 00:39:15.761 00:39:15.761 Latency(us) 00:39:15.761 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:15.761 =================================================================================================================== 00:39:15.761 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:15.761 16:44:35 keyring_file -- common/autotest_common.sh@974 -- # wait 851803 00:39:16.695 16:44:36 keyring_file -- keyring/file.sh@117 -- # bperfpid=853399 00:39:16.695 16:44:36 keyring_file -- keyring/file.sh@119 -- # waitforlisten 853399 /var/tmp/bperf.sock 00:39:16.695 16:44:36 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 853399 ']' 00:39:16.695 16:44:36 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:39:16.695 16:44:36 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:16.695 16:44:36 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:16.695 16:44:36 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:16.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:39:16.695 16:44:36 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:39:16.695 "subsystems": [ 00:39:16.695 { 00:39:16.695 "subsystem": "keyring", 00:39:16.695 "config": [ 00:39:16.695 { 00:39:16.695 "method": "keyring_file_add_key", 00:39:16.695 "params": { 00:39:16.695 "name": "key0", 00:39:16.695 "path": "/tmp/tmp.1PHaKSw9Bn" 00:39:16.695 } 00:39:16.695 }, 00:39:16.696 { 00:39:16.696 "method": "keyring_file_add_key", 00:39:16.696 "params": { 00:39:16.696 "name": "key1", 00:39:16.696 "path": "/tmp/tmp.k2TN2aMugx" 00:39:16.696 } 00:39:16.696 } 00:39:16.696 ] 00:39:16.696 }, 00:39:16.696 { 00:39:16.696 "subsystem": "iobuf", 00:39:16.696 "config": [ 00:39:16.696 { 00:39:16.696 "method": "iobuf_set_options", 00:39:16.696 "params": { 00:39:16.696 "small_pool_count": 8192, 00:39:16.696 "large_pool_count": 1024, 00:39:16.696 "small_bufsize": 8192, 00:39:16.696 "large_bufsize": 135168 00:39:16.696 } 00:39:16.696 } 00:39:16.696 ] 00:39:16.696 }, 00:39:16.696 { 00:39:16.696 "subsystem": "sock", 00:39:16.696 "config": [ 00:39:16.696 { 00:39:16.696 "method": "sock_set_default_impl", 00:39:16.696 "params": { 00:39:16.696 "impl_name": "posix" 00:39:16.696 } 00:39:16.696 }, 00:39:16.696 { 00:39:16.696 "method": "sock_impl_set_options", 00:39:16.696 "params": { 00:39:16.696 "impl_name": "ssl", 00:39:16.696 "recv_buf_size": 4096, 00:39:16.696 "send_buf_size": 4096, 00:39:16.696 "enable_recv_pipe": true, 00:39:16.696 "enable_quickack": false, 00:39:16.696 "enable_placement_id": 0, 00:39:16.696 "enable_zerocopy_send_server": true, 00:39:16.696 "enable_zerocopy_send_client": false, 00:39:16.696 "zerocopy_threshold": 0, 00:39:16.696 "tls_version": 0, 00:39:16.696 "enable_ktls": false 00:39:16.696 } 00:39:16.696 }, 00:39:16.696 { 00:39:16.696 "method": "sock_impl_set_options", 00:39:16.696 "params": { 00:39:16.696 "impl_name": "posix", 00:39:16.696 "recv_buf_size": 2097152, 00:39:16.696 "send_buf_size": 2097152, 00:39:16.696 "enable_recv_pipe": true, 00:39:16.696 "enable_quickack": false, 00:39:16.696 "enable_placement_id": 0, 00:39:16.696 "enable_zerocopy_send_server": true, 00:39:16.696 "enable_zerocopy_send_client": false, 00:39:16.696 "zerocopy_threshold": 0, 00:39:16.696 "tls_version": 0, 00:39:16.696 "enable_ktls": false 00:39:16.696 } 00:39:16.696 } 00:39:16.696 ] 00:39:16.696 }, 00:39:16.696 { 00:39:16.696 "subsystem": "vmd", 00:39:16.696 "config": [] 00:39:16.696 }, 00:39:16.696 { 00:39:16.696 "subsystem": "accel", 00:39:16.696 "config": [ 00:39:16.696 { 00:39:16.696 "method": "accel_set_options", 00:39:16.696 "params": { 00:39:16.696 "small_cache_size": 128, 00:39:16.696 "large_cache_size": 16, 00:39:16.696 "task_count": 2048, 00:39:16.696 "sequence_count": 2048, 00:39:16.696 "buf_count": 2048 00:39:16.696 } 00:39:16.696 } 00:39:16.696 ] 00:39:16.696 }, 00:39:16.696 { 00:39:16.696 "subsystem": "bdev", 00:39:16.696 "config": [ 00:39:16.696 { 00:39:16.696 "method": "bdev_set_options", 00:39:16.696 "params": { 00:39:16.696 "bdev_io_pool_size": 65535, 00:39:16.696 "bdev_io_cache_size": 256, 00:39:16.696 "bdev_auto_examine": true, 00:39:16.696 "iobuf_small_cache_size": 128, 00:39:16.696 "iobuf_large_cache_size": 16 00:39:16.696 } 00:39:16.696 }, 00:39:16.696 { 00:39:16.696 "method": "bdev_raid_set_options", 00:39:16.696 "params": { 00:39:16.696 "process_window_size_kb": 1024, 00:39:16.696 "process_max_bandwidth_mb_sec": 0 00:39:16.696 } 00:39:16.696 }, 00:39:16.696 { 00:39:16.696 "method": "bdev_iscsi_set_options", 00:39:16.696 "params": { 00:39:16.696 "timeout_sec": 30 00:39:16.696 } 00:39:16.696 
}, 00:39:16.696 { 00:39:16.696 "method": "bdev_nvme_set_options", 00:39:16.696 "params": { 00:39:16.696 "action_on_timeout": "none", 00:39:16.696 "timeout_us": 0, 00:39:16.696 "timeout_admin_us": 0, 00:39:16.696 "keep_alive_timeout_ms": 10000, 00:39:16.696 "arbitration_burst": 0, 00:39:16.696 "low_priority_weight": 0, 00:39:16.696 "medium_priority_weight": 0, 00:39:16.696 "high_priority_weight": 0, 00:39:16.696 "nvme_adminq_poll_period_us": 10000, 00:39:16.696 "nvme_ioq_poll_period_us": 0, 00:39:16.696 "io_queue_requests": 512, 00:39:16.696 "delay_cmd_submit": true, 00:39:16.696 "transport_retry_count": 4, 00:39:16.696 "bdev_retry_count": 3, 00:39:16.696 "transport_ack_timeout": 0, 00:39:16.696 "ctrlr_loss_timeout_sec": 0, 00:39:16.696 "reconnect_delay_sec": 0, 00:39:16.696 "fast_io_fail_timeout_sec": 0, 00:39:16.696 "disable_auto_failback": false, 00:39:16.696 "generate_uuids": false, 00:39:16.696 "transport_tos": 0, 00:39:16.696 "nvme_error_stat": false, 00:39:16.696 "rdma_srq_size": 0, 00:39:16.696 "io_path_stat": false, 00:39:16.696 "allow_accel_sequence": false, 00:39:16.696 "rdma_max_cq_size": 0, 00:39:16.696 "rdma_cm_event_timeout_ms": 0, 00:39:16.696 "dhchap_digests": [ 00:39:16.696 "sha256", 00:39:16.696 "sha384", 00:39:16.696 "sha512" 00:39:16.696 ], 00:39:16.696 "dhchap_dhgroups": [ 00:39:16.696 "null", 00:39:16.696 "ffdhe2048", 00:39:16.696 "ffdhe3072", 00:39:16.696 "ffdhe4096", 00:39:16.696 "ffdhe6144", 00:39:16.696 "ffdhe8192" 00:39:16.696 ] 00:39:16.696 } 00:39:16.696 }, 00:39:16.696 { 00:39:16.696 "method": "bdev_nvme_attach_controller", 00:39:16.696 "params": { 00:39:16.696 "name": "nvme0", 00:39:16.696 "trtype": "TCP", 00:39:16.696 "adrfam": "IPv4", 00:39:16.696 "traddr": "127.0.0.1", 00:39:16.696 "trsvcid": "4420", 00:39:16.696 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:16.696 "prchk_reftag": false, 00:39:16.696 "prchk_guard": false, 00:39:16.696 "ctrlr_loss_timeout_sec": 0, 00:39:16.696 "reconnect_delay_sec": 0, 00:39:16.696 "fast_io_fail_timeout_sec": 0, 00:39:16.696 "psk": "key0", 00:39:16.696 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:16.696 "hdgst": false, 00:39:16.696 "ddgst": false 00:39:16.696 } 00:39:16.696 }, 00:39:16.696 { 00:39:16.696 "method": "bdev_nvme_set_hotplug", 00:39:16.696 "params": { 00:39:16.696 "period_us": 100000, 00:39:16.696 "enable": false 00:39:16.696 } 00:39:16.696 }, 00:39:16.696 { 00:39:16.696 "method": "bdev_wait_for_examine" 00:39:16.696 } 00:39:16.696 ] 00:39:16.696 }, 00:39:16.696 { 00:39:16.696 "subsystem": "nbd", 00:39:16.696 "config": [] 00:39:16.696 } 00:39:16.696 ] 00:39:16.696 }' 00:39:16.696 16:44:36 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:16.696 16:44:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:16.696 [2024-07-26 16:44:36.349719] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:39:16.696 [2024-07-26 16:44:36.349875] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid853399 ] 00:39:16.696 EAL: No free 2048 kB hugepages reported on node 1 00:39:16.955 [2024-07-26 16:44:36.472368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:16.955 [2024-07-26 16:44:36.705235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:17.522 [2024-07-26 16:44:37.119453] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:17.522 16:44:37 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:17.522 16:44:37 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:39:17.522 16:44:37 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:39:17.522 16:44:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:17.522 16:44:37 keyring_file -- keyring/file.sh@120 -- # jq length 00:39:17.780 16:44:37 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:39:17.780 16:44:37 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:39:17.780 16:44:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:17.780 16:44:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:17.780 16:44:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:17.780 16:44:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:17.780 16:44:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:18.038 16:44:37 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:39:18.038 16:44:37 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:39:18.038 16:44:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:18.038 16:44:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:18.038 16:44:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:18.038 16:44:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:18.038 16:44:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:18.296 16:44:38 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:39:18.296 16:44:38 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:39:18.296 16:44:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:39:18.296 16:44:38 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:39:18.554 16:44:38 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:39:18.554 16:44:38 keyring_file -- keyring/file.sh@1 -- # cleanup 00:39:18.554 16:44:38 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.1PHaKSw9Bn /tmp/tmp.k2TN2aMugx 00:39:18.554 16:44:38 keyring_file -- keyring/file.sh@20 -- # killprocess 853399 00:39:18.554 16:44:38 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 853399 ']' 00:39:18.554 16:44:38 keyring_file -- common/autotest_common.sh@954 -- # kill -0 853399 00:39:18.554 16:44:38 keyring_file -- 
common/autotest_common.sh@955 -- # uname 00:39:18.554 16:44:38 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:18.554 16:44:38 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 853399 00:39:18.554 16:44:38 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:39:18.554 16:44:38 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:39:18.554 16:44:38 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 853399' 00:39:18.554 killing process with pid 853399 00:39:18.554 16:44:38 keyring_file -- common/autotest_common.sh@969 -- # kill 853399 00:39:18.554 Received shutdown signal, test time was about 1.000000 seconds 00:39:18.554 00:39:18.554 Latency(us) 00:39:18.554 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:18.554 =================================================================================================================== 00:39:18.554 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:39:18.554 16:44:38 keyring_file -- common/autotest_common.sh@974 -- # wait 853399 00:39:19.946 16:44:39 keyring_file -- keyring/file.sh@21 -- # killprocess 851661 00:39:19.946 16:44:39 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 851661 ']' 00:39:19.946 16:44:39 keyring_file -- common/autotest_common.sh@954 -- # kill -0 851661 00:39:19.946 16:44:39 keyring_file -- common/autotest_common.sh@955 -- # uname 00:39:19.946 16:44:39 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:19.946 16:44:39 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 851661 00:39:19.946 16:44:39 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:19.946 16:44:39 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:19.946 16:44:39 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 851661' 00:39:19.946 killing process with pid 851661 00:39:19.946 16:44:39 keyring_file -- common/autotest_common.sh@969 -- # kill 851661 00:39:19.947 [2024-07-26 16:44:39.376318] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:39:19.947 16:44:39 keyring_file -- common/autotest_common.sh@974 -- # wait 851661 00:39:22.476 00:39:22.476 real 0m19.252s 00:39:22.476 user 0m42.456s 00:39:22.476 sys 0m3.719s 00:39:22.476 16:44:41 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:22.476 16:44:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:22.476 ************************************ 00:39:22.476 END TEST keyring_file 00:39:22.476 ************************************ 00:39:22.476 16:44:41 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:39:22.476 16:44:41 -- spdk/autotest.sh@301 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:22.476 16:44:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:39:22.476 16:44:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:22.476 16:44:41 -- common/autotest_common.sh@10 -- # set +x 00:39:22.476 ************************************ 00:39:22.476 START TEST keyring_linux 00:39:22.476 ************************************ 00:39:22.476 16:44:41 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:22.476 * Looking for test storage... 
00:39:22.476 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:22.476 16:44:41 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:22.476 16:44:41 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:22.476 16:44:41 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:39:22.476 16:44:41 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:22.476 16:44:41 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:22.476 16:44:41 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:22.476 16:44:41 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:22.476 16:44:41 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:22.476 16:44:41 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:22.476 16:44:41 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:22.476 16:44:41 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:22.476 16:44:41 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:22.476 16:44:41 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:22.476 16:44:41 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:22.476 16:44:41 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:22.476 16:44:41 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:22.476 16:44:41 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:22.476 16:44:41 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:22.476 16:44:41 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:22.476 16:44:41 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:22.476 16:44:41 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:22.476 16:44:41 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:22.476 16:44:41 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:22.476 16:44:41 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:22.476 16:44:41 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:22.476 16:44:41 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:22.476 16:44:41 keyring_linux -- paths/export.sh@5 -- # export PATH 00:39:22.476 16:44:41 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:22.476 16:44:41 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:39:22.476 16:44:41 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:22.476 16:44:41 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:22.476 16:44:41 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:22.476 16:44:41 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:22.476 16:44:41 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:22.476 16:44:41 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:22.476 16:44:41 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:22.476 16:44:41 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:22.476 16:44:41 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:22.476 16:44:41 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:22.476 16:44:41 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:22.477 16:44:41 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:39:22.477 16:44:41 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:39:22.477 16:44:41 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:39:22.477 16:44:41 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:39:22.477 16:44:41 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:22.477 16:44:41 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:39:22.477 16:44:41 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:22.477 16:44:41 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:22.477 16:44:41 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:39:22.477 16:44:41 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:22.477 16:44:41 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:22.477 16:44:41 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:39:22.477 16:44:41 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:39:22.477 16:44:41 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:39:22.477 16:44:41 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:39:22.477 16:44:41 keyring_linux -- nvmf/common.sh@705 -- # python - 00:39:22.477 16:44:41 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:39:22.477 16:44:41 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:39:22.477 /tmp/:spdk-test:key0 00:39:22.477 16:44:41 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:39:22.477 16:44:41 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:22.477 16:44:41 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:39:22.477 16:44:41 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:22.477 16:44:41 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:22.477 16:44:41 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:39:22.477 16:44:41 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:22.477 16:44:41 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:22.477 16:44:41 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:39:22.477 16:44:41 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:39:22.477 16:44:41 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:39:22.477 16:44:41 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:39:22.477 16:44:41 keyring_linux -- nvmf/common.sh@705 -- # python - 00:39:22.477 16:44:41 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:39:22.477 16:44:41 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:39:22.477 /tmp/:spdk-test:key1 00:39:22.477 16:44:41 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=854082 00:39:22.477 16:44:41 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:22.477 16:44:41 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 854082 00:39:22.477 16:44:41 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 854082 ']' 00:39:22.477 16:44:41 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:22.477 16:44:41 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:22.477 16:44:41 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:22.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:22.477 16:44:41 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:22.477 16:44:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:22.477 [2024-07-26 16:44:41.993259] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:39:22.477 [2024-07-26 16:44:41.993457] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid854082 ] 00:39:22.477 EAL: No free 2048 kB hugepages reported on node 1 00:39:22.477 [2024-07-26 16:44:42.114843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:22.735 [2024-07-26 16:44:42.363413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:23.670 16:44:43 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:23.670 16:44:43 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:39:23.670 16:44:43 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:39:23.670 16:44:43 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:23.670 16:44:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:23.670 [2024-07-26 16:44:43.248666] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:23.670 null0 00:39:23.670 [2024-07-26 16:44:43.280720] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:23.670 [2024-07-26 16:44:43.281294] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:23.670 16:44:43 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:23.670 16:44:43 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:39:23.670 778458932 00:39:23.670 16:44:43 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:39:23.670 491287007 00:39:23.670 16:44:43 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=854302 00:39:23.670 16:44:43 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:39:23.670 16:44:43 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 854302 /var/tmp/bperf.sock 00:39:23.670 16:44:43 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 854302 ']' 00:39:23.670 16:44:43 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:23.670 16:44:43 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:23.670 16:44:43 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:23.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:23.670 16:44:43 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:23.670 16:44:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:23.670 [2024-07-26 16:44:43.381973] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:39:23.670 [2024-07-26 16:44:43.382142] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid854302 ] 00:39:23.927 EAL: No free 2048 kB hugepages reported on node 1 00:39:23.927 [2024-07-26 16:44:43.513597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:24.185 [2024-07-26 16:44:43.769249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:24.752 16:44:44 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:24.752 16:44:44 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:39:24.752 16:44:44 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:39:24.752 16:44:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:39:25.009 16:44:44 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:39:25.009 16:44:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:39:25.575 16:44:45 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:25.575 16:44:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:25.833 [2024-07-26 16:44:45.385390] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:25.833 nvme0n1 00:39:25.833 16:44:45 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:39:25.833 16:44:45 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:39:25.833 16:44:45 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:25.833 16:44:45 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:25.833 16:44:45 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:25.833 16:44:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:26.092 16:44:45 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:39:26.092 16:44:45 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:26.092 16:44:45 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:39:26.092 16:44:45 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:39:26.092 16:44:45 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:26.092 16:44:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:26.092 16:44:45 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:39:26.350 16:44:45 keyring_linux -- keyring/linux.sh@25 -- # sn=778458932 00:39:26.350 16:44:45 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:39:26.350 16:44:45 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:39:26.350 16:44:45 keyring_linux -- keyring/linux.sh@26 -- # [[ 778458932 == \7\7\8\4\5\8\9\3\2 ]] 00:39:26.350 16:44:45 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 778458932 00:39:26.350 16:44:45 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:39:26.350 16:44:45 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:26.350 Running I/O for 1 seconds... 00:39:27.726 00:39:27.726 Latency(us) 00:39:27.726 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:27.726 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:39:27.726 nvme0n1 : 1.03 3557.82 13.90 0.00 0.00 35537.52 7718.68 40972.14 00:39:27.726 =================================================================================================================== 00:39:27.726 Total : 3557.82 13.90 0.00 0.00 35537.52 7718.68 40972.14 00:39:27.726 0 00:39:27.726 16:44:47 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:27.726 16:44:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:27.726 16:44:47 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:39:27.726 16:44:47 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:39:27.726 16:44:47 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:27.726 16:44:47 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:27.726 16:44:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:27.726 16:44:47 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:27.984 16:44:47 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:39:27.984 16:44:47 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:27.984 16:44:47 keyring_linux -- keyring/linux.sh@23 -- # return 00:39:27.984 16:44:47 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:27.984 16:44:47 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:39:27.984 16:44:47 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:27.984 16:44:47 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:39:27.984 16:44:47 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:27.984 16:44:47 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:39:27.984 16:44:47 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:27.984 16:44:47 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:27.984 16:44:47 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:28.243 [2024-07-26 16:44:47.879831] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:28.243 [2024-07-26 16:44:47.880419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7000 (107): Transport endpoint is not connected 00:39:28.243 [2024-07-26 16:44:47.881388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7000 (9): Bad file descriptor 00:39:28.243 [2024-07-26 16:44:47.882385] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:39:28.243 [2024-07-26 16:44:47.882430] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:28.243 [2024-07-26 16:44:47.882461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:39:28.243 request: 00:39:28.243 { 00:39:28.243 "name": "nvme0", 00:39:28.243 "trtype": "tcp", 00:39:28.243 "traddr": "127.0.0.1", 00:39:28.243 "adrfam": "ipv4", 00:39:28.243 "trsvcid": "4420", 00:39:28.243 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:28.243 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:28.243 "prchk_reftag": false, 00:39:28.243 "prchk_guard": false, 00:39:28.243 "hdgst": false, 00:39:28.243 "ddgst": false, 00:39:28.243 "psk": ":spdk-test:key1", 00:39:28.243 "method": "bdev_nvme_attach_controller", 00:39:28.243 "req_id": 1 00:39:28.243 } 00:39:28.243 Got JSON-RPC error response 00:39:28.243 response: 00:39:28.243 { 00:39:28.243 "code": -5, 00:39:28.243 "message": "Input/output error" 00:39:28.243 } 00:39:28.243 16:44:47 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:39:28.243 16:44:47 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:28.243 16:44:47 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:28.243 16:44:47 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:28.243 16:44:47 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:39:28.243 16:44:47 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:28.243 16:44:47 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:39:28.243 16:44:47 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:39:28.243 16:44:47 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:39:28.243 16:44:47 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:39:28.243 16:44:47 keyring_linux -- keyring/linux.sh@33 -- # sn=778458932 00:39:28.243 16:44:47 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 778458932 00:39:28.243 1 links removed 00:39:28.243 16:44:47 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:28.243 16:44:47 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:39:28.243 16:44:47 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:39:28.243 16:44:47 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:39:28.244 16:44:47 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:39:28.244 16:44:47 keyring_linux -- keyring/linux.sh@33 -- # sn=491287007 
00:39:28.244 16:44:47 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 491287007 00:39:28.244 1 links removed 00:39:28.244 16:44:47 keyring_linux -- keyring/linux.sh@41 -- # killprocess 854302 00:39:28.244 16:44:47 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 854302 ']' 00:39:28.244 16:44:47 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 854302 00:39:28.244 16:44:47 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:39:28.244 16:44:47 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:28.244 16:44:47 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 854302 00:39:28.244 16:44:47 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:39:28.244 16:44:47 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:39:28.244 16:44:47 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 854302' 00:39:28.244 killing process with pid 854302 00:39:28.244 16:44:47 keyring_linux -- common/autotest_common.sh@969 -- # kill 854302 00:39:28.244 Received shutdown signal, test time was about 1.000000 seconds 00:39:28.244 00:39:28.244 Latency(us) 00:39:28.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:28.244 =================================================================================================================== 00:39:28.244 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:28.244 16:44:47 keyring_linux -- common/autotest_common.sh@974 -- # wait 854302 00:39:29.620 16:44:48 keyring_linux -- keyring/linux.sh@42 -- # killprocess 854082 00:39:29.620 16:44:48 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 854082 ']' 00:39:29.620 16:44:48 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 854082 00:39:29.620 16:44:48 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:39:29.620 16:44:48 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:29.620 16:44:48 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 854082 00:39:29.620 16:44:49 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:29.620 16:44:49 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:29.620 16:44:49 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 854082' 00:39:29.620 killing process with pid 854082 00:39:29.620 16:44:49 keyring_linux -- common/autotest_common.sh@969 -- # kill 854082 00:39:29.620 16:44:49 keyring_linux -- common/autotest_common.sh@974 -- # wait 854082 00:39:32.151 00:39:32.151 real 0m9.589s 00:39:32.151 user 0m15.911s 00:39:32.151 sys 0m1.868s 00:39:32.151 16:44:51 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:32.151 16:44:51 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:32.151 ************************************ 00:39:32.151 END TEST keyring_linux 00:39:32.151 ************************************ 00:39:32.151 16:44:51 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:39:32.151 16:44:51 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:39:32.151 16:44:51 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:39:32.151 16:44:51 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:39:32.151 16:44:51 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:39:32.151 16:44:51 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:39:32.151 16:44:51 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:39:32.151 16:44:51 -- 
spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:39:32.151 16:44:51 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:39:32.151 16:44:51 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:39:32.151 16:44:51 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:39:32.151 16:44:51 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:39:32.151 16:44:51 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:39:32.151 16:44:51 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:39:32.151 16:44:51 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:39:32.151 16:44:51 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:39:32.151 16:44:51 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:39:32.151 16:44:51 -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:32.151 16:44:51 -- common/autotest_common.sh@10 -- # set +x 00:39:32.151 16:44:51 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:39:32.151 16:44:51 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:39:32.151 16:44:51 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:39:32.151 16:44:51 -- common/autotest_common.sh@10 -- # set +x 00:39:33.527 INFO: APP EXITING 00:39:33.527 INFO: killing all VMs 00:39:33.527 INFO: killing vhost app 00:39:33.527 INFO: EXIT DONE 00:39:34.459 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:39:34.459 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:39:34.459 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:39:34.459 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:39:34.459 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:39:34.459 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:39:34.459 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:39:34.459 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:39:34.459 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:39:34.459 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:39:34.459 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:39:34.459 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:39:34.459 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:39:34.459 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:39:34.459 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:39:34.717 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:39:34.717 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:39:36.092 Cleaning 00:39:36.092 Removing: /var/run/dpdk/spdk0/config 00:39:36.092 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:39:36.092 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:39:36.092 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:39:36.092 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:39:36.092 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:39:36.092 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:39:36.092 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:39:36.092 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:39:36.092 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:39:36.092 Removing: /var/run/dpdk/spdk0/hugepage_info 00:39:36.092 Removing: /var/run/dpdk/spdk1/config 00:39:36.092 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:39:36.092 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:39:36.092 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:39:36.092 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:39:36.092 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:39:36.092 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:39:36.093 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:39:36.093 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:39:36.093 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:39:36.093 Removing: /var/run/dpdk/spdk1/hugepage_info 00:39:36.093 Removing: /var/run/dpdk/spdk1/mp_socket 00:39:36.093 Removing: /var/run/dpdk/spdk2/config 00:39:36.093 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:39:36.093 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:39:36.093 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:39:36.093 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:39:36.093 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:39:36.093 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:39:36.093 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:39:36.093 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:39:36.093 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:39:36.093 Removing: /var/run/dpdk/spdk2/hugepage_info 00:39:36.093 Removing: /var/run/dpdk/spdk3/config 00:39:36.093 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:39:36.093 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:39:36.093 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:39:36.093 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:39:36.093 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:39:36.093 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:39:36.093 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:39:36.093 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:39:36.093 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:39:36.093 Removing: /var/run/dpdk/spdk3/hugepage_info 00:39:36.093 Removing: /var/run/dpdk/spdk4/config 00:39:36.093 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:39:36.093 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:39:36.093 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:39:36.093 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:39:36.093 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:39:36.093 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:39:36.093 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:39:36.093 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:39:36.093 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:39:36.093 Removing: /var/run/dpdk/spdk4/hugepage_info 00:39:36.093 Removing: /dev/shm/bdev_svc_trace.1 00:39:36.093 Removing: /dev/shm/nvmf_trace.0 00:39:36.093 Removing: /dev/shm/spdk_tgt_trace.pid513589 00:39:36.093 Removing: /var/run/dpdk/spdk0 00:39:36.093 Removing: /var/run/dpdk/spdk1 00:39:36.093 Removing: /var/run/dpdk/spdk2 00:39:36.093 Removing: /var/run/dpdk/spdk3 00:39:36.093 Removing: /var/run/dpdk/spdk4 00:39:36.093 Removing: /var/run/dpdk/spdk_pid510725 00:39:36.093 Removing: /var/run/dpdk/spdk_pid511851 00:39:36.093 Removing: /var/run/dpdk/spdk_pid513589 00:39:36.093 Removing: /var/run/dpdk/spdk_pid514311 00:39:36.093 Removing: /var/run/dpdk/spdk_pid515264 00:39:36.093 Removing: /var/run/dpdk/spdk_pid515682 00:39:36.093 Removing: /var/run/dpdk/spdk_pid516660 00:39:36.093 Removing: /var/run/dpdk/spdk_pid516815 00:39:36.093 Removing: /var/run/dpdk/spdk_pid517437 00:39:36.093 Removing: /var/run/dpdk/spdk_pid518896 00:39:36.093 Removing: /var/run/dpdk/spdk_pid520076 00:39:36.093 Removing: /var/run/dpdk/spdk_pid520671 00:39:36.093 
Removing: /var/run/dpdk/spdk_pid521250 00:39:36.093 Removing: /var/run/dpdk/spdk_pid521847 00:39:36.093 Removing: /var/run/dpdk/spdk_pid522322 00:39:36.093 Removing: /var/run/dpdk/spdk_pid522602 00:39:36.093 Removing: /var/run/dpdk/spdk_pid522886 00:39:36.093 Removing: /var/run/dpdk/spdk_pid523136 00:39:36.093 Removing: /var/run/dpdk/spdk_pid523528 00:39:36.093 Removing: /var/run/dpdk/spdk_pid526268 00:39:36.093 Removing: /var/run/dpdk/spdk_pid526708 00:39:36.093 Removing: /var/run/dpdk/spdk_pid527264 00:39:36.093 Removing: /var/run/dpdk/spdk_pid527411 00:39:36.093 Removing: /var/run/dpdk/spdk_pid528884 00:39:36.093 Removing: /var/run/dpdk/spdk_pid529136 00:39:36.093 Removing: /var/run/dpdk/spdk_pid530890 00:39:36.093 Removing: /var/run/dpdk/spdk_pid531031 00:39:36.093 Removing: /var/run/dpdk/spdk_pid531582 00:39:36.093 Removing: /var/run/dpdk/spdk_pid531734 00:39:36.093 Removing: /var/run/dpdk/spdk_pid532157 00:39:36.093 Removing: /var/run/dpdk/spdk_pid532305 00:39:36.093 Removing: /var/run/dpdk/spdk_pid533336 00:39:36.093 Removing: /var/run/dpdk/spdk_pid533625 00:39:36.093 Removing: /var/run/dpdk/spdk_pid533945 00:39:36.093 Removing: /var/run/dpdk/spdk_pid536425 00:39:36.093 Removing: /var/run/dpdk/spdk_pid539184 00:39:36.093 Removing: /var/run/dpdk/spdk_pid546183 00:39:36.093 Removing: /var/run/dpdk/spdk_pid546707 00:39:36.093 Removing: /var/run/dpdk/spdk_pid549363 00:39:36.093 Removing: /var/run/dpdk/spdk_pid549648 00:39:36.093 Removing: /var/run/dpdk/spdk_pid552558 00:39:36.093 Removing: /var/run/dpdk/spdk_pid556526 00:39:36.093 Removing: /var/run/dpdk/spdk_pid558848 00:39:36.093 Removing: /var/run/dpdk/spdk_pid566678 00:39:36.093 Removing: /var/run/dpdk/spdk_pid572285 00:39:36.093 Removing: /var/run/dpdk/spdk_pid573736 00:39:36.093 Removing: /var/run/dpdk/spdk_pid574552 00:39:36.093 Removing: /var/run/dpdk/spdk_pid585685 00:39:36.093 Removing: /var/run/dpdk/spdk_pid588249 00:39:36.093 Removing: /var/run/dpdk/spdk_pid644346 00:39:36.093 Removing: /var/run/dpdk/spdk_pid647769 00:39:36.093 Removing: /var/run/dpdk/spdk_pid651981 00:39:36.093 Removing: /var/run/dpdk/spdk_pid658279 00:39:36.093 Removing: /var/run/dpdk/spdk_pid684512 00:39:36.093 Removing: /var/run/dpdk/spdk_pid687553 00:39:36.093 Removing: /var/run/dpdk/spdk_pid688731 00:39:36.093 Removing: /var/run/dpdk/spdk_pid690183 00:39:36.093 Removing: /var/run/dpdk/spdk_pid690457 00:39:36.093 Removing: /var/run/dpdk/spdk_pid690728 00:39:36.093 Removing: /var/run/dpdk/spdk_pid691009 00:39:36.093 Removing: /var/run/dpdk/spdk_pid691718 00:39:36.093 Removing: /var/run/dpdk/spdk_pid693179 00:39:36.093 Removing: /var/run/dpdk/spdk_pid694548 00:39:36.093 Removing: /var/run/dpdk/spdk_pid695128 00:39:36.093 Removing: /var/run/dpdk/spdk_pid697121 00:39:36.093 Removing: /var/run/dpdk/spdk_pid697949 00:39:36.093 Removing: /var/run/dpdk/spdk_pid698706 00:39:36.093 Removing: /var/run/dpdk/spdk_pid701437 00:39:36.093 Removing: /var/run/dpdk/spdk_pid705094 00:39:36.093 Removing: /var/run/dpdk/spdk_pid709361 00:39:36.093 Removing: /var/run/dpdk/spdk_pid733264 00:39:36.093 Removing: /var/run/dpdk/spdk_pid736812 00:39:36.093 Removing: /var/run/dpdk/spdk_pid740844 00:39:36.093 Removing: /var/run/dpdk/spdk_pid742440 00:39:36.093 Removing: /var/run/dpdk/spdk_pid744073 00:39:36.093 Removing: /var/run/dpdk/spdk_pid747167 00:39:36.093 Removing: /var/run/dpdk/spdk_pid749807 00:39:36.093 Removing: /var/run/dpdk/spdk_pid754406 00:39:36.093 Removing: /var/run/dpdk/spdk_pid754531 00:39:36.093 Removing: /var/run/dpdk/spdk_pid757560 00:39:36.093 Removing: 
/var/run/dpdk/spdk_pid757706 00:39:36.093 Removing: /var/run/dpdk/spdk_pid757954 00:39:36.093 Removing: /var/run/dpdk/spdk_pid758229 00:39:36.093 Removing: /var/run/dpdk/spdk_pid758238 00:39:36.093 Removing: /var/run/dpdk/spdk_pid759435 00:39:36.093 Removing: /var/run/dpdk/spdk_pid760610 00:39:36.093 Removing: /var/run/dpdk/spdk_pid761787 00:39:36.093 Removing: /var/run/dpdk/spdk_pid762970 00:39:36.093 Removing: /var/run/dpdk/spdk_pid764254 00:39:36.093 Removing: /var/run/dpdk/spdk_pid765911 00:39:36.093 Removing: /var/run/dpdk/spdk_pid769988 00:39:36.093 Removing: /var/run/dpdk/spdk_pid770439 00:39:36.093 Removing: /var/run/dpdk/spdk_pid771761 00:39:36.093 Removing: /var/run/dpdk/spdk_pid772569 00:39:36.093 Removing: /var/run/dpdk/spdk_pid776540 00:39:36.093 Removing: /var/run/dpdk/spdk_pid778667 00:39:36.093 Removing: /var/run/dpdk/spdk_pid782469 00:39:36.093 Removing: /var/run/dpdk/spdk_pid786317 00:39:36.093 Removing: /var/run/dpdk/spdk_pid792920 00:39:36.093 Removing: /var/run/dpdk/spdk_pid798274 00:39:36.093 Removing: /var/run/dpdk/spdk_pid798277 00:39:36.093 Removing: /var/run/dpdk/spdk_pid810739 00:39:36.093 Removing: /var/run/dpdk/spdk_pid811411 00:39:36.093 Removing: /var/run/dpdk/spdk_pid811962 00:39:36.093 Removing: /var/run/dpdk/spdk_pid812614 00:39:36.093 Removing: /var/run/dpdk/spdk_pid813714 00:39:36.093 Removing: /var/run/dpdk/spdk_pid814265 00:39:36.093 Removing: /var/run/dpdk/spdk_pid814925 00:39:36.093 Removing: /var/run/dpdk/spdk_pid815469 00:39:36.093 Removing: /var/run/dpdk/spdk_pid818350 00:39:36.093 Removing: /var/run/dpdk/spdk_pid818627 00:39:36.093 Removing: /var/run/dpdk/spdk_pid822673 00:39:36.093 Removing: /var/run/dpdk/spdk_pid822856 00:39:36.093 Removing: /var/run/dpdk/spdk_pid824711 00:39:36.093 Removing: /var/run/dpdk/spdk_pid830641 00:39:36.093 Removing: /var/run/dpdk/spdk_pid830764 00:39:36.093 Removing: /var/run/dpdk/spdk_pid833804 00:39:36.093 Removing: /var/run/dpdk/spdk_pid835324 00:39:36.093 Removing: /var/run/dpdk/spdk_pid836839 00:39:36.093 Removing: /var/run/dpdk/spdk_pid837820 00:39:36.093 Removing: /var/run/dpdk/spdk_pid839341 00:39:36.093 Removing: /var/run/dpdk/spdk_pid840225 00:39:36.093 Removing: /var/run/dpdk/spdk_pid845873 00:39:36.093 Removing: /var/run/dpdk/spdk_pid846261 00:39:36.093 Removing: /var/run/dpdk/spdk_pid846652 00:39:36.093 Removing: /var/run/dpdk/spdk_pid848547 00:39:36.093 Removing: /var/run/dpdk/spdk_pid848892 00:39:36.093 Removing: /var/run/dpdk/spdk_pid849221 00:39:36.093 Removing: /var/run/dpdk/spdk_pid851661 00:39:36.093 Removing: /var/run/dpdk/spdk_pid851803 00:39:36.093 Removing: /var/run/dpdk/spdk_pid853399 00:39:36.093 Removing: /var/run/dpdk/spdk_pid854082 00:39:36.093 Removing: /var/run/dpdk/spdk_pid854302 00:39:36.093 Clean 00:39:36.352 16:44:55 -- common/autotest_common.sh@1451 -- # return 0 00:39:36.352 16:44:55 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:39:36.352 16:44:55 -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:36.352 16:44:55 -- common/autotest_common.sh@10 -- # set +x 00:39:36.352 16:44:55 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:39:36.352 16:44:55 -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:36.352 16:44:55 -- common/autotest_common.sh@10 -- # set +x 00:39:36.352 16:44:55 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:39:36.352 16:44:55 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:39:36.352 16:44:55 -- 
spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:39:36.352 16:44:55 -- spdk/autotest.sh@395 -- # hash lcov 00:39:36.352 16:44:55 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:39:36.352 16:44:55 -- spdk/autotest.sh@397 -- # hostname 00:39:36.352 16:44:55 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:39:36.610 geninfo: WARNING: invalid characters removed from testname! 00:40:08.705 16:45:23 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:08.705 16:45:27 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:11.233 16:45:30 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:13.760 16:45:33 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:17.036 16:45:36 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:19.565 16:45:38 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:22.848 16:45:42 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:40:22.848 16:45:42 -- common/autobuild_common.sh@15 -- $ source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:22.848 16:45:42 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:40:22.848 16:45:42 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:22.848 16:45:42 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:22.848 16:45:42 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:22.848 16:45:42 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:22.848 16:45:42 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:22.848 16:45:42 -- paths/export.sh@5 -- $ export PATH 00:40:22.848 16:45:42 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:22.848 16:45:42 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:40:22.848 16:45:42 -- common/autobuild_common.sh@447 -- $ date +%s 00:40:22.848 16:45:42 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1722005142.XXXXXX 00:40:22.848 16:45:42 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1722005142.K06Frk 00:40:22.848 16:45:42 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:40:22.848 16:45:42 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:40:22.848 16:45:42 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:40:22.848 16:45:42 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:40:22.848 16:45:42 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:40:22.848 16:45:42 -- common/autobuild_common.sh@463 -- $ get_config_params 00:40:22.848 16:45:42 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:40:22.848 16:45:42 -- common/autotest_common.sh@10 -- $ set +x 00:40:22.848 16:45:42 -- common/autobuild_common.sh@463 -- $ 
config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:40:22.848 16:45:42 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:40:22.848 16:45:42 -- pm/common@17 -- $ local monitor 00:40:22.849 16:45:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:22.849 16:45:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:22.849 16:45:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:22.849 16:45:42 -- pm/common@21 -- $ date +%s 00:40:22.849 16:45:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:22.849 16:45:42 -- pm/common@21 -- $ date +%s 00:40:22.849 16:45:42 -- pm/common@25 -- $ sleep 1 00:40:22.849 16:45:42 -- pm/common@21 -- $ date +%s 00:40:22.849 16:45:42 -- pm/common@21 -- $ date +%s 00:40:22.849 16:45:42 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1722005142 00:40:22.849 16:45:42 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1722005142 00:40:22.849 16:45:42 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1722005142 00:40:22.849 16:45:42 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1722005142 00:40:23.108 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1722005142_collect-vmstat.pm.log 00:40:23.108 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1722005142_collect-cpu-load.pm.log 00:40:23.108 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1722005142_collect-cpu-temp.pm.log 00:40:23.108 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1722005142_collect-bmc-pm.bmc.pm.log 00:40:24.045 16:45:43 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:40:24.045 16:45:43 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:40:24.045 16:45:43 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:24.045 16:45:43 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:40:24.045 16:45:43 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:40:24.045 16:45:43 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:40:24.045 16:45:43 -- spdk/autopackage.sh@19 -- $ timing_finish 00:40:24.045 16:45:43 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:40:24.045 16:45:43 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:40:24.045 16:45:43 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:40:24.045 16:45:43 -- spdk/autopackage.sh@20 -- $ exit 0 00:40:24.045 16:45:43 -- 
spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:40:24.045 16:45:43 -- pm/common@29 -- $ signal_monitor_resources TERM 00:40:24.045 16:45:43 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:40:24.045 16:45:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:24.045 16:45:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:40:24.045 16:45:43 -- pm/common@44 -- $ pid=867112 00:40:24.045 16:45:43 -- pm/common@50 -- $ kill -TERM 867112 00:40:24.045 16:45:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:24.045 16:45:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:40:24.045 16:45:43 -- pm/common@44 -- $ pid=867114 00:40:24.045 16:45:43 -- pm/common@50 -- $ kill -TERM 867114 00:40:24.045 16:45:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:24.045 16:45:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:40:24.045 16:45:43 -- pm/common@44 -- $ pid=867116 00:40:24.045 16:45:43 -- pm/common@50 -- $ kill -TERM 867116 00:40:24.045 16:45:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:24.045 16:45:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:40:24.045 16:45:43 -- pm/common@44 -- $ pid=867147 00:40:24.045 16:45:43 -- pm/common@50 -- $ sudo -E kill -TERM 867147 00:40:24.045 + [[ -n 424411 ]] 00:40:24.045 + sudo kill 424411 00:40:24.055 [Pipeline] } 00:40:24.071 [Pipeline] // stage 00:40:24.075 [Pipeline] } 00:40:24.094 [Pipeline] // timeout 00:40:24.098 [Pipeline] } 00:40:24.113 [Pipeline] // catchError 00:40:24.117 [Pipeline] } 00:40:24.132 [Pipeline] // wrap 00:40:24.136 [Pipeline] } 00:40:24.149 [Pipeline] // catchError 00:40:24.158 [Pipeline] stage 00:40:24.160 [Pipeline] { (Epilogue) 00:40:24.173 [Pipeline] catchError 00:40:24.174 [Pipeline] { 00:40:24.188 [Pipeline] echo 00:40:24.189 Cleanup processes 00:40:24.195 [Pipeline] sh 00:40:24.518 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:24.518 867249 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:40:24.518 867378 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:24.536 [Pipeline] sh 00:40:24.814 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:24.814 ++ grep -v 'sudo pgrep' 00:40:24.814 ++ awk '{print $1}' 00:40:24.814 + sudo kill -9 867249 00:40:24.825 [Pipeline] sh 00:40:25.105 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:40:37.312 [Pipeline] sh 00:40:37.595 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:40:37.595 Artifacts sizes are good 00:40:37.610 [Pipeline] archiveArtifacts 00:40:37.617 Archiving artifacts 00:40:37.850 [Pipeline] sh 00:40:38.134 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:40:38.148 [Pipeline] cleanWs 00:40:38.159 [WS-CLEANUP] Deleting project workspace... 00:40:38.159 [WS-CLEANUP] Deferred wipeout is used... 
00:40:38.166 [WS-CLEANUP] done 00:40:38.168 [Pipeline] } 00:40:38.188 [Pipeline] // catchError 00:40:38.201 [Pipeline] sh 00:40:38.482 + logger -p user.info -t JENKINS-CI 00:40:38.490 [Pipeline] } 00:40:38.507 [Pipeline] // stage 00:40:38.513 [Pipeline] } 00:40:38.530 [Pipeline] // node 00:40:38.537 [Pipeline] End of Pipeline 00:40:38.576 Finished: SUCCESS
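
For readers who want to reproduce the keyring_linux steps traced earlier in this log outside the CI harness, the following is a minimal sketch of the keyctl flow the test drives. The key names and example TLS PSK string are copied from the log output above; the rpc.py path and the /var/tmp/bperf.sock socket are specific to this workspace and are shown only as commented placeholders.

#!/usr/bin/env bash
# Sketch of the session-keyring PSK handling exercised by keyring/linux.sh above.
set -euo pipefail

psk0="NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:"

# Store the PSK as a "user" key in the session keyring (@s); keyctl prints the serial number.
sn=$(keyctl add user :spdk-test:key0 "$psk0" @s)

# Look the key up again by name and confirm the stored payload round-trips,
# mirroring the checks at keyring/linux.sh lines 26-27 in the trace above.
found=$(keyctl search @s user :spdk-test:key0)
[ "$found" = "$sn" ]
[ "$(keyctl print "$sn")" = "$psk0" ]

# An SPDK initiator can then reference the key by name (path/socket are workspace-specific):
#   scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
#       -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
#       -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

# Cleanup mirrors the test's teardown: unlink the key from the session keyring.
keyctl unlink "$sn" @s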